RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System

Overview



RL4RS is a real-world deep reinforcement learning recommender system benchmark for practitioners and researchers.

import gym
from rl4rs.env.slate import SlateRecEnv, SlateState

# config is a user-supplied dict of environment settings
# (batch size, max_steps, dataset/model file paths, ...).
sim = SlateRecEnv(config, state_cls=SlateState)
env = gym.make('SlateRecEnv-v0', recsim=sim)
for i in range(epoch):  # epoch: number of passes over the logged data
    obs = env.reset()
    for j in range(config["max_steps"]):
        action = env.offline_action  # replay the logged (offline) action
        next_obs, reward, done, info = env.step(action)
        if done[0]:  # vector env: done is returned per batch element
            break

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

Paper: https://arxiv.org/pdf/2110.11073.pdf

Kaggle Competition (old version): https://www.kaggle.com/c/bigdata2021-rl-recsys/overview

Resource Page: https://fuxi-up-research.gitbook.io/fuxi-up-challenges/

key features

⭐ Real-World Datasets

  • two real-world datasets: Unlike artificial or semi-simulated datasets, RL4RS collects raw logged data from one of the most popular games released by NetEase Games, in which recommendation is naturally a sequential decision-making problem (see the loading sketch after this list).
  • data understanding tool: RL4RS provides a data understanding tool for testing whether RL is an appropriate fit for a given recommendation dataset.
  • advanced dataset setting: For each dataset, RL4RS provides separate data collected before and after the reinforcement learning deployment, which simulates the difficulty of training a good RL policy from data collected by an SL-based algorithm.
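
For a quick look at the raw logged data, a minimal loading sketch is shown below. It assumes the dataset archive has been unpacked so the raw_data/ folder from the directory listing further down is available; the exact column layout and separator may differ, so treat this as illustrative only.

import pandas as pd

# Illustrative only: file names follow the raw_data/ folder of the dataset archive;
# adjust the separator if the CSVs are not comma-separated.
item_info = pd.read_csv('raw_data/item_info.csv')              # item features
dataset_a_sl = pd.read_csv('raw_data/rl4rs_dataset_a_sl.csv')  # logs before RL deployment
dataset_a_rl = pd.read_csv('raw_data/rl4rs_dataset_a_rl.csv')  # logs after RL deployment

print(item_info.head())
print(dataset_a_sl.shape, dataset_a_rl.shape)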

⚑ Practical RL Baselines

  • model-free RL: RL4RS supports state-of-the-art RL libraries such as RLlib and Tianshou. We provide example code for state-of-the-art model-free algorithms (A2C, PPO, etc.) implemented with the RLlib library on both the discrete and the continuous (combining policy gradients with a K-NN search) RL4RS environments.
  • offline RL: RL4RS implements offline RL algorithms, including BC, BCQ, and CQL, through the d3rlpy library (a minimal d3rlpy sketch follows this list). RL4RS is also the first to report the effectiveness of offline RL algorithms (BCQ and CQL) in the RL-based RS domain.
  • RL-based RS baselines: RL4RS implements algorithms proposed in the RL-based RS domain, including Exact-k and the Adversarial User Model.
  • offline RL evaluation: In addition to the reward indicator and the traditional RL evaluation setting (train and test on the same environment), RL4RS tries to provide a complete evaluation framework by placing more emphasis on counterfactual policy evaluation.
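
The offline RL baselines themselves are driven by run_batch_rl.sh (see below). As a rough, standalone illustration of what offline training with d3rlpy looks like, here is a minimal sketch on synthetic transitions; the observation/action shapes and hyperparameters are placeholders, not the ones RL4RS uses.

import numpy as np
from d3rlpy.dataset import MDPDataset
from d3rlpy.algos import DiscreteCQL

# Synthetic logged transitions: 1000 steps, 8-dim observations, 4 discrete actions.
observations = np.random.random((1000, 8)).astype('float32')
actions = np.random.randint(4, size=1000)
rewards = np.random.random(1000).astype('float32')
terminals = (np.arange(1000) % 100 == 99)  # an episode ends every 100 steps

dataset = MDPDataset(observations, actions, rewards, terminals)

# Train a discrete CQL policy purely from the logged data.
cql = DiscreteCQL(use_gpu=False)
cql.fit(dataset, n_steps=1000, n_steps_per_epoch=500)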

🔰 Easy-To-Use, Scalable API

  • low coupling structure: RL4RS specifies a fixed data format to reduce code coupling, and data-related logic is unified into data preprocessing scripts or user-defined state classes.
  • file-based RL environment: RL4RS implements a file-based gym environment that supports random sampling and sequential access to datasets larger than memory, and it is easy to extend to distributed file systems.
  • http-based vector env: RL4RS natively supports vector envs, i.e., the environment processes a batch of data at a time. We further wrap the env behind an HTTP interface so that it can be deployed on multiple servers to accelerate sample generation (an illustrative client sketch follows this list).
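
Because the vector env sits behind HTTP, remote rollout workers only need an HTTP client to reset and step it. The snippet below only illustrates the pattern with made-up routes and payload fields; the actual protocol is defined by rl4rs/server/gymHttpServer.py and its client rl4rs/server/gymHttpClient.py.

import requests

# Illustrative only: the routes and JSON fields here are hypothetical,
# not the ones implemented by gymHttpServer.py.
BASE_URL = 'http://localhost:5000'

def env_reset(instance_id):
    resp = requests.post(f'{BASE_URL}/envs/{instance_id}/reset')
    return resp.json()['observation']  # batched observation (vector env)

def env_step(instance_id, action):
    resp = requests.post(f'{BASE_URL}/envs/{instance_id}/step', json={'action': action})
    data = resp.json()
    return data['observation'], data['reward'], data['done'], data['info']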

experimental features (contributions welcome!)

  • A new dataset for bundle recommendation with variable discounts, flexible recommendation triggers, and modifiable item content is in preparation.
  • Use raw features rather than hidden-layer embeddings as observation input for offline RL.
  • Model-based RL algorithms.
  • Reward-oriented simulation environment construction.
  • Reproduce more algorithms (RL models, safe exploration techniques, etc.) proposed in the RL-based RS domain.
  • Support Parametric-Action DQN, in which concatenated state-action pairs are fed in and a Q-value is output for each pair (see the sketch after this list).
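
A possible shape for the Parametric-Action DQN item above: the Q-network takes a state together with the features of each candidate action, scores every state-action pair, and the agent picks the highest-scoring candidate. The sketch below uses PyTorch only for brevity (the repo itself is TensorFlow-based) and placeholder dimensions.

import torch
import torch.nn as nn

class ParametricActionQNet(nn.Module):
    def __init__(self, state_dim=64, action_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one Q-value per state-action pair
        )

    def forward(self, state, candidate_actions):
        # state: (batch, state_dim); candidate_actions: (batch, num_candidates, action_dim)
        num_candidates = candidate_actions.shape[1]
        state_tiled = state.unsqueeze(1).expand(-1, num_candidates, -1)
        pairs = torch.cat([state_tiled, candidate_actions], dim=-1)
        return self.mlp(pairs).squeeze(-1)  # (batch, num_candidates)

# Pick the candidate with the highest Q-value.
q_net = ParametricActionQNet()
q_values = q_net(torch.randn(4, 64), torch.randn(4, 10, 32))
best_action = q_values.argmax(dim=-1)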

installation

RL4RS supports Linux and requires at least 64 GB of memory.

Github (recommended)

$ git clone https://github.com/fuxiAIlab/RL4RS
$ export PYTHONPATH=$PYTHONPATH:`pwd`/rl4rs
$ conda env create -f environment.yml
$ conda activate rl4rs

Dataset Download (Google Drive)

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

.
|-- batchrl
|   |-- BCQ_SeqSlateRecEnv-v0_b_all.h5
|   |-- BCQ_SlateRecEnv-v0_a_all.h5
|   |-- BC_SeqSlateRecEnv-v0_b_all.h5
|   |-- BC_SlateRecEnv-v0_a_all.h5
|   |-- CQL_SeqSlateRecEnv-v0_b_all.h5
|   `-- CQL_SlateRecEnv-v0_a_all.h5
|-- data_understanding_tool
|   |-- dataset
|   |   |-- ml-25m.zip
|   |   `-- yoochoose-clicks.dat.zip
|   `-- finetuned
|       |-- movielens.csv
|       |-- movielens.h5
|       |-- recsys15.csv
|       |-- recsys15.h5
|       |-- rl4rs.csv
|       `-- rl4rs.h5
|-- exactk
|   |-- exact_k.ckpt.10000.data-00000-of-00001
|   |-- exact_k.ckpt.10000.index
|   `-- exact_k.ckpt.10000.meta
|-- ope
|   `-- logged_policy.h5
|-- raw_data
|   |-- item_info.csv
|   |-- rl4rs_dataset_a_rl.csv
|   |-- rl4rs_dataset_a_sl.csv
|   |-- rl4rs_dataset_b_rl.csv
|   `-- rl4rs_dataset_b_sl.csv
`-- simulator
    |-- finetuned
    |   |-- simulator_a_dien
    |   |   |-- checkpoint
    |   |   |-- model.data-00000-of-00001
    |   |   |-- model.index
    |   |   `-- model.meta
    |   `-- simulator_b2_dien
    |       |-- checkpoint
    |       |-- model.data-00000-of-00001
    |       |-- model.index
    |       `-- model.meta
    |-- rl4rs_dataset_a_shuf.csv
    `-- rl4rs_dataset_b3_shuf.csv

two ways to use this resource

Reinforcement Learning Only

# move simulator/*.csv to rl4rs/dataset
# move simulator/finetuned/* to rl4rs/output
cd reproductions/
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.

start from scratch (batch-rl, environment simulation, etc.)

cd reproductions/
# First step: generate tfrecords for supervised learning (environment simulation).
# This step is time-consuming; you can comment it out at first if you do not need the tfrecords yet.
bash run_split.sh

# environment simulation part (need tfrecord)
# run these scripts to compare different SL methods
bash run_supervised_item.sh dnn/widedeep/dien/lstm
bash run_supervised_slate.sh dnn_slate/adversarial_slate/etc.
# or you can directly train DIEN-based simulator as RL Env.
bash run_simulator_train.sh dien

# model-free part (needs run_simulator_train.sh)
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.

# offline RL part (needs run_simulator_train.sh)
# first generate the offline dataset for offline RL (dataset_generate stage),
# then train the offline RL algorithms (train stage)
bash run_batch_rl.sh BC/BCQ/CQL

reported baselines

| algorithm | category | support mode |
| --- | --- | --- |
| Wide&Deep | supervised learning | item-wise classification/slate-wise classification/item ranking |
| GRU4Rec | supervised learning | item-wise classification/slate-wise classification/item ranking |
| DIEN | supervised learning | item-wise classification/slate-wise classification/item ranking |
| Adversarial User Model | supervised learning | item-wise classification/slate-wise classification/item ranking |
| Exact-K | model-free RL | discrete env & hidden state as observation |
| Policy Gradient (PG) | model-free RL | discrete/conti env & raw feature/hidden state as observation |
| Deep Q-Network (DQN) | model-free RL | discrete env & raw feature/hidden state as observation |
| Deep Deterministic Policy Gradients (DDPG) | model-free RL | conti env & raw feature/hidden state as observation |
| Asynchronous Actor-Critic (A2C) | model-free RL | discrete/conti env & raw feature/hidden state as observation |
| Proximal Policy Optimization (PPO) | model-free RL | discrete/conti env & raw feature/hidden state as observation |
| Behavior Cloning | supervised learning / offline RL | discrete env & hidden state as observation |
| Batch Constrained Q-learning (BCQ) | offline RL | discrete env & hidden state as observation |
| Conservative Q-Learning (CQL) | offline RL | discrete env & hidden state as observation |

supported algorithms (from RLlib and d3rlpy)

| algorithm | discrete control | continuous control | offline RL? |
| --- | --- | --- | --- |
| Behavior Cloning (supervised learning) | ✅ | ✅ | |
| Deep Q-Network (DQN) | ✅ | ⛔ | |
| Double DQN | ✅ | ⛔ | |
| Rainbow | ✅ | ⛔ | |
| PPO | ✅ | ✅ | |
| A2C / A3C | ✅ | ✅ | |
| IMPALA | ✅ | ✅ | |
| Deep Deterministic Policy Gradients (DDPG) | ⛔ | ✅ | |
| Twin Delayed Deep Deterministic Policy Gradients (TD3) | ⛔ | ✅ | |
| Soft Actor-Critic (SAC) | ✅ | ✅ | |
| Batch Constrained Q-learning (BCQ) | ✅ | ✅ | ✅ |
| Bootstrapping Error Accumulation Reduction (BEAR) | ⛔ | ✅ | ✅ |
| Advantage-Weighted Regression (AWR) | ✅ | ✅ | ✅ |
| Conservative Q-Learning (CQL) | ✅ | ✅ | ✅ |
| Advantage Weighted Actor-Critic (AWAC) | ⛔ | ✅ | ✅ |
| Critic Regularized Regression (CRR) | ⛔ | ✅ | ✅ |
| Policy in Latent Action Space (PLAS) | ⛔ | ✅ | ✅ |
| TD3+BC | ⛔ | ✅ | ✅ |

examples

See script/ and reproductions/.

RLlib examples: https://docs.ray.io/en/latest/rllib-examples.html

d3rlpy examples: https://d3rlpy.readthedocs.io/en/v1.0.0/
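
For orientation, a minimal standalone RLlib run (Ray 1.x-style agents API, with a toy gym env standing in for the RL4RS env) looks roughly like this; the repo's actual entry point is reproductions/modelfree_train.py driven by run_modelfree_rl.sh.

import ray
from ray.rllib.agents.ppo import PPOTrainer

# Toy example only: 'CartPole-v0' stands in for the RL4RS env served by gymHttpServer.py.
ray.init(ignore_reinit_error=True)
trainer = PPOTrainer(env='CartPole-v0', config={'framework': 'tf', 'num_workers': 0})
for _ in range(3):
    result = trainer.train()
    print(result['episode_reward_mean'])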

reproductions

See reproductions/.

bash run_xx.sh ${param}
| experiment in the paper | shell script | optional param. | description |
| --- | --- | --- | --- |
| Sec. 3 | run_split.sh | - | dataset split/shuffle/align (for dataset B)/convert to tfrecord |
| Sec. 4 | run_mdp_checker.sh | recsys15/movielens/rl4rs | unzip ml-25m.zip and yoochoose-clicks.dat.zip into dataset/ |
| Sec. 5.1 | run_supervised_item.sh | dnn/widedeep/lstm/dien | Table 5, item-wise classification |
| Sec. 5.1 | run_supervised_slate.sh | dnn_slate/widedeep_slate/lstm_slate/dien_slate/adversarial_slate | Table 5, item-wise rank |
| Sec. 5.1 | run_supervised_slate.sh | dnn_slate_multiclass/widedeep_slate_multiclass/lstm_slate_multiclass/dien_slate_multiclass | Table 5, slate-wise classification |
| Sec. 5.1 & Sec. 6 | run_simulator_train.sh | dien | DIEN-based simulator for different train sets |
| Sec. 5.1 & Sec. 6 | run_simulator_eval.sh | dien | Table 6 |
| Sec. 5.1 & Sec. 6 | run_modelfree_rl.sh | PG/DQN/A2C/PPO/IMPALA/DDPG/*_conti | Table 7 |
| Sec. 5.2 & Sec. 6 | run_batch_rl.sh | BC/BCQ/CQL | Table 8 |
| Sec. 5.1 | run_exact_k.sh | - | Exact-k |
| - | run_simulator_env_test.sh | - | check the consistency of features (observations) between the RL env and the supervised simulator |

contributions

Any kind of contribution to RL4RS would be highly appreciated! Please contact us by email.

community

| Channel | Link |
| --- | --- |
| Materials | Google Drive |
| Email | Mail |
| Issues | GitHub Issues |
| Fuxi Team | Fuxi HomePage |
| Our Team | Open-project |

citation

@article{2021RL4RS,
  title={RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System},
  author={Kai Wang and Zhene Zou and Yue Shang and Qilin Deng and Minghao Zhao and Runze Wu and Xudong Shen and Tangjie Lyu and Changjie Fan},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11073}
}
You might also like...
DeepMind Alchemy task environment: a meta-reinforcement learning benchmark

The DeepMind Alchemy environment is a meta-reinforcement learning benchmark that presents tasks sampled from a task distribution with deep underlying structure.

RoboDesk A Multi-Task Reinforcement Learning Benchmark

RoboDesk A Multi-Task Reinforcement Learning Benchmark If you find this open source release useful, please reference in your paper: @misc{kannan2021ro

The Unsupervised Reinforcement Learning Benchmark (URLB)

The Unsupervised Reinforcement Learning Benchmark (URLB) URLB provides a set of leading algorithms for unsupervised reinforcement learning where agent

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.
A toolkit for making real world machine learning and data analysis applications in C++

dlib C++ library Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real worl

Learning Generative Models of Textured 3D Meshes from Real-World Images, ICCV 2021

Learning Generative Models of Textured 3D Meshes from Real-World Images This is the reference implementation of "Learning Generative Models of Texture

Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

Legged Robots that Keep on Learning Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World, whic

Product-based-recommendation-system - A product based recommendation system which uses Machine learning algorithm such as KNN and cosine similarity
Real-world Anomaly Detection in Surveillance Videos- pytorch Re-implementation

Real world Anomaly Detection in Surveillance Videos : Pytorch RE-Implementation This repository is a re-implementation of "Real-world Anomaly Detectio

Comments
  • No Appendix in origin paper


    Thanks for this repo! I find the section 4.2 of the paper says that we can know more about data details in Appendix C, and the section 5.1 says that more details about the environment simulation model are shown in Appendix D. However, I can't find any appendix in the paper from this url shown in the repo. Maybe forget to add appendix to the paper? Or where can I find all the appendix? ~~ Thanks again!

    opened by Zessay 1
  • ConnectionResetError(104, 'Connection reset by peer'))


    I'm sorry that there is a program error report and I would like to ask you for advice. When running bash run_modelfree_rl.sh DQN, a connection error occurs. The error message is as follows:

    2022-11-15 08:19:12,029 INFO replay_buffer.py:46 -- Estimated max memory usage for replay buffer is 0.4361 GB (100000.0 batches of size 1, 4361 bytes each), available system memory is 201.44095232 GB 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/kernel:0' shape=(256, 64) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/bias:0' shape=(64,) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/kernel:0' shape=(64, 284) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/bias:0' shape=(284,) dtype=float32> 2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:143 -- Training on concatenated sample batches:

    { 'inputs': [ np.ndarray((576, 540), dtype=float32, min=-1.0, max=37.179, mean=-0.169), np.ndarray((576, 540), dtype=float32, min=-1.0, max=38.907, mean=-0.207), np.ndarray((576,), dtype=int64, min=1.0, max=283.0, mean=103.844), np.ndarray((576,), dtype=float32, min=0.0, max=162.121, mean=7.551), np.ndarray((576,), dtype=bool, min=0.0, max=1.0, mean=0.135), np.ndarray((576,), dtype=float64, min=1.0, max=1.0, mean=1.0)], 'placeholders': [ <tf.Tensor 'default_policy/obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/new_obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>, <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/weights:0' shape=(?,) dtype=float32>], 'state_inputs': []}

    2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:188 -- Divided 576 rollout sequences, each of length 1, among 1 devices. Traceback (most recent call last): File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 438, in _error_catcher yield File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 519, in read data = self._fp.read(amt) if not fp_closed else b"" File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 463, in read n = self.readinto(b) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 507, in readinto n = self.fp.readinto(b) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/socket.py", line 586, in readinto return self._sock.recv_into(b) ConnectionResetError: [Errno 104] Connection reset by peer

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 760, in generate for chunk in self.raw.stream(chunk_size, decode_content=True): File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream data = self.read(amt=amt, decode_content=decode_content) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 541, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/contextlib.py", line 99, in exit self.gen.throw(type, value, traceback) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 455, in _error_catcher raise ProtocolError("Connection broken: %r" % e, e) urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "modelfree_train.py", line 429, in result = trainer.train() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 643, in train raise e File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 629, in train result = Trainable.train(self) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/tune/trainable.py", line 237, in train result = self.step() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 170, in step res = next(self.train_exec_impl) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in next return next(self.built_iterator) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 1075, in build_union item = next(it) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in next return next(self.built_iterator) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/execution/rollout_ops.py", line 75, in sampler yield workers.local_worker().sample() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 739, in sample batches = [self.input_reader.next()] File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 101, in next batches = [self.get_data()] File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 231, in get_data item = next(self.rollout_provider) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 615, in _env_runner sample_collector=sample_collector, File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 934, in _process_observations env_id) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/env/base_env.py", line 368, in try_reset return {_DUMMY_AGENT_ID: self.vector_env.reset_at(env_id)} File "/home/wlxy/userfolder/RL4RS/rl4rs/utils/rllib_vector_env.py", line 44, in reset_at self.reset_cache = self.env.reset() File "/home/wlxy/userfolder/RL4RS/rl4rs/server/httpEnv.py", line 43, in reset observation = self.client.env_reset(self.instance_id) File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 67, in env_reset resp = 
self._post_request(route, None) File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 43, in _post_request data=json.dumps(data)) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 577, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 687, in send r.content File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 838, in content self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 763, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    I would like to ask for your help, thank you very much.

    opened by hubin111 3
  • Problems about TensorFlow version and killed error


    I reproduced run_batch_rl according to the guidelines but the errors are as follows.

    `WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn_cell_impl.py:575: calling Zeros.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/deepctr/contrib/rnn.py:257: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.keras.backend.get_session is deprecated. Please use tf.compat.v1.keras.backend.get_session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:124: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:125: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:129: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

    /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/slate.py:279: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray complete_states = np.array(samples.get_complete_states()) run_batch_rl.sh: line 82: 180 Killed python -u batchrl_train.py $algo 'dataset_generate' "{'env':'SlateRecEnv-v0','iteminfo_file':'${rl4rs_dataset_dir}/item_info.csv','sample_file':'${rl4rs_dataset_dir}/rl4rs_dataset_a_shuf.csv','model_file':'${rl4rs_output_dir}/simulator_a_dien/model','trial_name':'a_all'}"`

    First it seems to be some warnings with the TensorFlow version, my own version is 1.15.0, I checked the environment file that what it need is also 1.15.0. I tried other versions such as 1.14.0 and 2.0.0 but still failed. However actually they are just warnings but not errors, so I don't know if I do have to use another version. And another problem is that finally it reported killed and aborted.

    opened by Heth0531 2
Releases: v1.1.0