A Real-World Benchmark for Reinforcement Learning based Recommender System

Overview

RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System


RL4RS is a real-world deep reinforcement learning recommender system benchmark for practitioners and researchers.

import gym
from rl4rs.env.slate import SlateRecEnv, SlateState

config = {...}  # environment settings, e.g. batch_size, max_steps, sample_file
epoch = 1       # number of passes over the logged data

sim = SlateRecEnv(config, state_cls=SlateState)
env = gym.make('SlateRecEnv-v0', recsim=sim)
for i in range(epoch):
    obs = env.reset()
    for j in range(config["max_steps"]):
        action = env.offline_action  # replay the logged (offline) action at this step
        next_obs, reward, done, info = env.step(action)
        if done[0]:  # vector env: done is a batch of flags
            break

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

Paper: https://arxiv.org/pdf/2110.11073.pdf

Kaggle Competition (old version): https://www.kaggle.com/c/bigdata2021-rl-recsys/overview

Resource Page: https://fuxi-up-research.gitbook.io/fuxi-up-challenges/

key features

Real-World Datasets

  • two real-world datasets: unlike artificial or semi-simulated datasets, RL4RS collects the raw logged data from one of the most popular games released by NetEase Games, where recommendation is naturally a sequential decision-making problem.
  • data understanding tool: RL4RS provides a data understanding tool for testing whether RL is a proper fit for a given recommendation system dataset.
  • advanced dataset setting: for each dataset, RL4RS provides separate data collected before and after the reinforcement learning deployment, which simulates the difficulty of training a good RL policy from a dataset collected by an SL-based algorithm (see the loading sketch below).
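As a quick illustration of this setting, the two logs for dataset A can be loaded side by side. This is a minimal sketch: the file names come from the raw_data/ folder of the dataset archive, but the default CSV parsing (separator, columns) is an assumption to check against the dataset's documentation.

import pandas as pd

# Dataset A: logs collected under the SL-based policy (before RL deployment)
# and under the RL policy (after deployment). File names from raw_data/.
sl_log = pd.read_csv('raw_data/rl4rs_dataset_a_sl.csv')  # assumed default CSV parsing
rl_log = pd.read_csv('raw_data/rl4rs_dataset_a_rl.csv')

# e.g. train on the SL-era log and test how a policy transfers to the RL era
print(len(sl_log), len(rl_log))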

Practical RL Baselines

  • model-free RL: RL4RS supports state-of-the-art RL libraries such as RLlib and Tianshou. We provide example code for state-of-the-art model-free algorithms (A2C, PPO, etc.) implemented with the RLlib library on both the discrete and the continuous (combining policy gradients with a K-NN search) RL4RS environments.
  • offline RL: RL4RS implements offline RL algorithms, including BC, BCQ, and CQL, through the d3rlpy library. RL4RS is also the first to report the effectiveness of offline RL algorithms (BCQ and CQL) in the RL-based RS domain.
  • RL-based RS baselines: RL4RS implements algorithms proposed in the RL-based RS domain, including Exact-k and Adversarial User Model.
  • offline RL evaluation: in addition to the reward indicator and the traditional RL evaluation setting (train and test on the same environment), RL4RS aims to provide a complete evaluation framework by placing more emphasis on counterfactual policy evaluation (see the sketch below).
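To make the counterfactual (off-policy) evaluation idea concrete, here is a minimal inverse propensity scoring (IPS) sketch. IPS is one common estimator rather than RL4RS's exact evaluator, and the data below is hypothetical.

import numpy as np

def ips_estimate(rewards, behavior_probs, target_probs):
    """Estimate the target policy's value from logs of the behavior policy."""
    weights = target_probs / behavior_probs  # importance weight per logged action
    return float(np.mean(weights * rewards))

rewards        = np.array([1.0, 0.0, 1.0, 1.0])  # observed rewards (hypothetical)
behavior_probs = np.array([0.5, 0.2, 0.4, 0.5])  # logging policy's action probabilities
target_probs   = np.array([0.7, 0.1, 0.6, 0.5])  # evaluated policy's action probabilities
print(ips_estimate(rewards, behavior_probs, target_probs))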

🔰 easy-to-use, scalable API

  • low coupling structure: RL4RS specifies a fixed data format to reduce code coupling, and data-related logic is unified into data preprocessing scripts or user-defined state classes.
  • file-based RL environment: RL4RS implements a file-based gym environment, which enables random sampling of, and sequential access to, datasets exceeding memory size; it is easy to extend to distributed file systems (see the sketch below).
  • http-based vector env: RL4RS natively supports vector envs, i.e., the environment processes a batch of data at a time. We further wrap the env behind an HTTP interface so that it can be deployed on multiple servers to accelerate sample generation.
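The file-based idea can be sketched as follows. This is a generic illustration, not RL4RS's actual implementation: index the byte offset of every line once, then seek to random offsets to sample batches from a file larger than memory.

import numpy as np

def build_offset_index(path):
    """Record the byte offset of every line without keeping the lines."""
    offsets, pos = [], 0
    with open(path, 'rb') as f:
        for line in f:
            offsets.append(pos)
            pos += len(line)
    return offsets

def sample_batch(path, offsets, batch_size, rng=np.random.default_rng(0)):
    """Randomly sample lines by seeking to their recorded offsets."""
    picks = rng.choice(len(offsets), size=batch_size, replace=False)
    with open(path, 'rb') as f:
        batch = []
        for i in picks:
            f.seek(offsets[i])
            batch.append(f.readline().decode().rstrip('\n'))
    return batch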

experimental features (contributions welcome!)

  • A new dataset for bundle recommendation with variable discounts, a flexible recommendation trigger, and modifiable item content is in preparation.
  • Take raw features rather than hidden-layer embeddings as the observation input for offline RL.
  • Model-based RL algorithms.
  • Reward-oriented construction of simulation environments.
  • Reproduce more algorithms (RL models, safe exploration techniques, etc.) proposed in the RL-based RS domain.
  • Support Parametric-Action DQN, in which we input concatenated state-action pairs and output the Q-value for each pair (see the sketch below).
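A minimal numpy sketch of the parametric-action idea (an illustration with a toy random network, not a proposed implementation): each candidate action is concatenated with the state and scored by one shared network, so the action set can vary per step.

import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, n_candidates = 8, 4, 5

# Toy two-layer MLP weights; a real Q-network would be trained.
W1, b1 = rng.normal(size=(state_dim + action_dim, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def q_value(state, action):
    """Q-value of one concatenated state-action pair."""
    h = np.maximum(np.concatenate([state, action]) @ W1 + b1, 0.0)  # ReLU layer
    return float(h @ W2 + b2)

state = rng.normal(size=state_dim)
candidates = rng.normal(size=(n_candidates, action_dim))  # per-step action set
best = int(np.argmax([q_value(state, a) for a in candidates]))  # greedy choice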

installation

RL4RS supports Linux and requires at least 64 GB of memory.

GitHub (recommended)

$ git clone https://github.com/fuxiAIlab/RL4RS
$ export PYTHONPATH=$PYTHONPATH:`pwd`/rl4rs
$ conda env create -f environment.yml
$ conda activate rl4rs

Dataset Download (Google Drive)

Dataset Download: https://drive.google.com/file/d/1YbPtPyYrMvMGOuqD4oHvK0epDtEhEb9v/view?usp=sharing

.
|-- batchrl
|   |-- BCQ_SeqSlateRecEnv-v0_b_all.h5
|   |-- BCQ_SlateRecEnv-v0_a_all.h5
|   |-- BC_SeqSlateRecEnv-v0_b_all.h5
|   |-- BC_SlateRecEnv-v0_a_all.h5
|   |-- CQL_SeqSlateRecEnv-v0_b_all.h5
|   `-- CQL_SlateRecEnv-v0_a_all.h5
|-- data_understanding_tool
|   |-- dataset
|   |   |-- ml-25m.zip
|   |   `-- yoochoose-clicks.dat.zip
|   `-- finetuned
|       |-- movielens.csv
|       |-- movielens.h5
|       |-- recsys15.csv
|       |-- recsys15.h5
|       |-- rl4rs.csv
|       `-- rl4rs.h5
|-- exactk
|   |-- exact_k.ckpt.10000.data-00000-of-00001
|   |-- exact_k.ckpt.10000.index
|   `-- exact_k.ckpt.10000.meta
|-- ope
|   `-- logged_policy.h5
|-- raw_data
|   |-- item_info.csv
|   |-- rl4rs_dataset_a_rl.csv
|   |-- rl4rs_dataset_a_sl.csv
|   |-- rl4rs_dataset_b_rl.csv
|   `-- rl4rs_dataset_b_sl.csv
`-- simulator
    |-- finetuned
    |   |-- simulator_a_dien
    |   |   |-- checkpoint
    |   |   |-- model.data-00000-of-00001
    |   |   |-- model.index
    |   |   `-- model.meta
    |   `-- simulator_b2_dien
    |       |-- checkpoint
    |       |-- model.data-00000-of-00001
    |       |-- model.index
    |       `-- model.meta
    |-- rl4rs_dataset_a_shuf.csv
    `-- rl4rs_dataset_b3_shuf.csv

two ways to use this resource

Reinforcement Learning Only

# move simulator/*.csv to rl4rs/dataset
# move simulator/finetuned/* to rl4rs/output
cd reproductions/
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.
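Once gymHttpServer.py is running, remote training code talks to the env over HTTP. The rough client sketch below assumes RL4RS's gymHttpClient mirrors the open-source gym-http-api interface; the Client class name, constructor URL, and the env id passed to env_create are assumptions, not confirmed API.

# Hypothetical usage, assuming a gym-http-api-style client.
from rl4rs.server.gymHttpClient import Client  # class name is an assumption

client = Client('http://127.0.0.1:5000')           # server started by gymHttpServer.py
instance_id = client.env_create('SlateRecEnv-v0')  # spawn a remote env instance (assumed route)
obs = client.env_reset(instance_id)                # env_reset(instance_id) as used by httpEnv.py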

start from scratch (batch-rl, environment simulation, etc.)

cd reproductions/
# The first step generates tfrecords for supervised learning (environment
# simulation); this is time-consuming, so you may comment those commands out at first.
bash run_split.sh

# environment simulation part (needs tfrecords)
# run these scripts to compare different SL methods
bash run_supervised_item.sh dnn/widedeep/dien/lstm
bash run_supervised_slate.sh dnn_slate/adversarial_slate/etc.
# or you can directly train DIEN-based simulator as RL Env.
bash run_simulator_train.sh dien

# model-free RL part (requires run_simulator_train.sh)
# run exact-k
bash run_exact_k.sh
# start http-based Env, then run RLlib library
nohup python -u rl4rs/server/gymHttpServer.py &
bash run_modelfree_rl.sh DQN/PPO/DDPG/PG/PG_conti/etc.

# offline RL part (requires run_simulator_train.sh)
# first generate the offline dataset (dataset_generate stage),
# then train the offline RL algorithms (train stage)
bash run_batch_rl.sh BC/BCQ/CQL
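For orientation, here is a minimal d3rlpy sketch of the offline RL stage, assuming the d3rlpy v1 API linked below. It uses toy random data with placeholder shapes, not the script's actual data pipeline.

import numpy as np
import d3rlpy

# Toy logged transitions (placeholder shapes, not the RL4RS dataset).
observations = np.random.random((1000, 540)).astype(np.float32)
actions = np.random.randint(284, size=1000)                # discrete action ids
rewards = np.random.random(1000).astype(np.float32)
terminals = (np.arange(1000) % 9 == 8).astype(np.float32)  # episode boundaries

dataset = d3rlpy.dataset.MDPDataset(observations, actions, rewards, terminals)
bcq = d3rlpy.algos.DiscreteBCQ()   # discrete BCQ, as run by run_batch_rl.sh BCQ
bcq.fit(dataset, n_epochs=1)       # train purely from the logged transitions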

reported baselines

| algorithm | category | support mode |
|---|---|---|
| Wide&Deep | supervised learning | item-wise classification / slate-wise classification / item ranking |
| GRU4Rec | supervised learning | item-wise classification / slate-wise classification / item ranking |
| DIEN | supervised learning | item-wise classification / slate-wise classification / item ranking |
| Adversarial User Model | supervised learning | item-wise classification / slate-wise classification / item ranking |
| Exact-K | model-free RL | discrete env & hidden state as observation |
| Policy Gradient (PG) | model-free RL | discrete env & raw feature/hidden state as observation |
| Deep Q-Network (DQN) | model-free RL | discrete env & raw feature/hidden state as observation |
| Deep Deterministic Policy Gradients (DDPG) | model-free RL | continuous env & raw feature/hidden state as observation |
| Advantage Actor-Critic (A2C) | model-free RL | discrete/continuous env & raw feature/hidden state as observation |
| Proximal Policy Optimization (PPO) | model-free RL | discrete/continuous env & raw feature/hidden state as observation |
| Behavior Cloning | supervised learning / offline RL | discrete env & hidden state as observation |
| Batch Constrained Q-learning (BCQ) | offline RL | discrete env & hidden state as observation |
| Conservative Q-Learning (CQL) | offline RL | discrete env & hidden state as observation |

supported algorithms (from RLlib and d3rlpy)

Per-algorithm support for discrete control, continuous control, and offline RL follows the RLlib and d3rlpy documentation. The supported algorithms are:

  • Behavior Cloning (supervised learning)
  • Deep Q-Network (DQN)
  • Double DQN
  • Rainbow
  • PPO
  • A2C / A3C
  • IMPALA
  • Deep Deterministic Policy Gradients (DDPG)
  • Twin Delayed Deep Deterministic Policy Gradients (TD3)
  • Soft Actor-Critic (SAC)
  • Batch Constrained Q-learning (BCQ)
  • Bootstrapping Error Accumulation Reduction (BEAR)
  • Advantage-Weighted Regression (AWR)
  • Conservative Q-Learning (CQL)
  • Advantage Weighted Actor-Critic (AWAC)
  • Critic Regularized Regression (CRR)
  • Policy in Latent Action Space (PLAS)
  • TD3+BC

examples

See script/ and reproductions/.

RLlib examples: https://docs.ray.io/en/latest/rllib-examples.html

d3rlpy examples: https://d3rlpy.readthedocs.io/en/v1.0.0/

reproductions

See reproductions/.

bash run_xx.sh ${param}
| experiment in the paper | shell script | optional param. | description |
|---|---|---|---|
| Sec.3 | run_split.sh | - | dataset split / shuffle / align (for dataset B) / to tfrecord |
| Sec.4 | run_mdp_checker.sh | recsys15/movielens/rl4rs | unzip ml-25m.zip and yoochoose-clicks.dat.zip into dataset/ |
| Sec.5.1 | run_supervised_item.sh | dnn/widedeep/lstm/dien | Table 5, item-wise classification |
| Sec.5.1 | run_supervised_slate.sh | dnn_slate/widedeep_slate/lstm_slate/dien_slate/adversarial_slate | Table 5, item-wise rank |
| Sec.5.1 | run_supervised_slate.sh | dnn_slate_multiclass/widedeep_slate_multiclass/lstm_slate_multiclass/dien_slate_multiclass | Table 5, slate-wise classification |
| Sec.5.1 & Sec.6 | run_simulator_train.sh | dien | DIEN-based simulator for different train sets |
| Sec.5.1 & Sec.6 | run_simulator_eval.sh | dien | Table 6 |
| Sec.5.1 & Sec.6 | run_modelfree_rl.sh | PG/DQN/A2C/PPO/IMPALA/DDPG/*_conti | Table 7 |
| Sec.5.2 & Sec.6 | run_batch_rl.sh | BC/BCQ/CQL | Table 8 |
| Sec.5.1 | run_exact_k.sh | - | Exact-k |
| - | run_simulator_env_test.sh | - | checks the consistency of features (observations) between the RL env and the supervised simulator |

contributions

Any kind of contribution to RL4RS would be highly appreciated! Please contact us by email.

community

| Channel | Link |
|---|---|
| Materials | Google Drive |
| Email | Mail |
| Issues | GitHub Issues |
| Fuxi Team | Fuxi HomePage |
| Our Team | Open-project |

citation

@article{2021RL4RS,
  title={RL4RS: A Real-World Benchmark for Reinforcement Learning based Recommender System},
  author={Kai Wang and Zhene Zou and Yue Shang and Qilin Deng and Minghao Zhao and Runze Wu and Xudong Shen and Tangjie Lyu and Changjie Fan},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11073}
}
You might also like...
DeepMind Alchemy task environment: a meta-reinforcement learning benchmark

The DeepMind Alchemy environment is a meta-reinforcement learning benchmark that presents tasks sampled from a task distribution with deep underlying structure.

RoboDesk A Multi-Task Reinforcement Learning Benchmark

RoboDesk A Multi-Task Reinforcement Learning Benchmark If you find this open source release useful, please reference in your paper: @misc{kannan2021ro

The Unsupervised Reinforcement Learning Benchmark (URLB)

The Unsupervised Reinforcement Learning Benchmark (URLB) URLB provides a set of leading algorithms for unsupervised reinforcement learning where agent

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.
A toolkit for making real world machine learning and data analysis applications in C++

dlib C++ library Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real worl

Learning Generative Models of Textured 3D Meshes from Real-World Images, ICCV 2021

Learning Generative Models of Textured 3D Meshes from Real-World Images This is the reference implementation of "Learning Generative Models of Texture

Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

Legged Robots that Keep on Learning Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World, whic

Product-based-recommendation-system - A product based recommendation system which uses Machine learning algorithm such as KNN and cosine similarity
Real-world Anomaly Detection in Surveillance Videos- pytorch Re-implementation

Real world Anomaly Detection in Surveillance Videos : Pytorch RE-Implementation This repository is a re-implementation of "Real-world Anomaly Detectio

Comments
  • No Appendix in origin paper

    Thanks for this repo! I find that Section 4.2 of the paper says we can learn more about the data details in Appendix C, and Section 5.1 says that more details about the environment simulation model are shown in Appendix D. However, I can't find any appendix in the paper at the URL shown in the repo. Did you perhaps forget to add the appendix to the paper? Or where can I find it? Thanks again!

    opened by Zessay 1
  • ConnectionResetError(104, 'Connection reset by peer'))

    Sorry to bother you with an error report; I would like to ask your advice. When running bash run_modelfree_rl.sh DQN, a connection error occurs. The error message is as follows:

    2022-11-15 08:19:12,029 INFO replay_buffer.py:46 -- Estimated max memory usage for replay buffer is 0.4361 GB (100000.0 batches of size 1, 4361 bytes each), available system memory is 201.44095232 GB 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/kernel:0' shape=(256, 64) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_1/bias:0' shape=(64,) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/kernel:0' shape=(64, 284) dtype=float32> 2022-11-15 08:19:14,843 INFO tf_policy.py:712 -- Optimizing variable <tf.Variable 'default_policy/fc_out/bias:0' shape=(284,) dtype=float32> 2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:143 -- Training on concatenated sample batches:

    { 'inputs': [ np.ndarray((576, 540), dtype=float32, min=-1.0, max=37.179, mean=-0.169), np.ndarray((576, 540), dtype=float32, min=-1.0, max=38.907, mean=-0.207), np.ndarray((576,), dtype=int64, min=1.0, max=283.0, mean=103.844), np.ndarray((576,), dtype=float32, min=0.0, max=162.121, mean=7.551), np.ndarray((576,), dtype=bool, min=0.0, max=1.0, mean=0.135), np.ndarray((576,), dtype=float64, min=1.0, max=1.0, mean=1.0)], 'placeholders': [ <tf.Tensor 'default_policy/obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/new_obs:0' shape=(?, 540) dtype=float32>, <tf.Tensor 'default_policy/action:0' shape=(?,) dtype=int64>, <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=float32>, <tf.Tensor 'default_policy/weights:0' shape=(?,) dtype=float32>], 'state_inputs': []}

    2022-11-15 08:19:14,846 INFO multi_gpu_impl.py:188 -- Divided 576 rollout sequences, each of length 1, among 1 devices. Traceback (most recent call last): File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 438, in _error_catcher yield File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 519, in read data = self._fp.read(amt) if not fp_closed else b"" File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 463, in read n = self.readinto(b) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/http/client.py", line 507, in readinto n = self.fp.readinto(b) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/socket.py", line 586, in readinto return self._sock.recv_into(b) ConnectionResetError: [Errno 104] Connection reset by peer

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 760, in generate for chunk in self.raw.stream(chunk_size, decode_content=True): File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream data = self.read(amt=amt, decode_content=decode_content) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 541, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/contextlib.py", line 99, in exit self.gen.throw(type, value, traceback) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/urllib3/response.py", line 455, in _error_catcher raise ProtocolError("Connection broken: %r" % e, e) urllib3.exceptions.ProtocolError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "modelfree_train.py", line 429, in result = trainer.train() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 643, in train raise e File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 629, in train result = Trainable.train(self) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/tune/trainable.py", line 237, in train result = self.step() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 170, in step res = next(self.train_exec_impl) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in next return next(self.built_iterator) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 843, in apply_filter for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 1075, in build_union item = next(it) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 756, in next return next(self.built_iterator) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/util/iter.py", line 783, in apply_foreach for item in it: File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/execution/rollout_ops.py", line 75, in sampler yield workers.local_worker().sample() File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 739, in sample batches = [self.input_reader.next()] File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 101, in next batches = [self.get_data()] File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 231, in get_data item = next(self.rollout_provider) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 615, in _env_runner sample_collector=sample_collector, File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 934, in _process_observations env_id) File "/home/wlxy/anaconda3/envs/rl4rs/lib/python3.6/site-packages/ray/rllib/env/base_env.py", line 368, in try_reset return {_DUMMY_AGENT_ID: self.vector_env.reset_at(env_id)} File "/home/wlxy/userfolder/RL4RS/rl4rs/utils/rllib_vector_env.py", line 44, in reset_at self.reset_cache = self.env.reset() File "/home/wlxy/userfolder/RL4RS/rl4rs/server/httpEnv.py", line 43, in reset observation = self.client.env_reset(self.instance_id) File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 67, in env_reset resp = 
self._post_request(route, None) File "/home/wlxy/userfolder/RL4RS/rl4rs/server/gymHttpClient.py", line 43, in _post_request data=json.dumps(data)) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 577, in post return self.request('POST', url, data=data, json=json, **kwargs) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/home/wlxy/.local/lib/python3.6/site-packages/requests/sessions.py", line 687, in send r.content File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 838, in content self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' File "/home/wlxy/.local/lib/python3.6/site-packages/requests/models.py", line 763, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))

    I would like to ask for your help, thank you very much.

    opened by hubin111 3
  • Problems about TensorFlow version and killed error

    I reproduced run_batch_rl.sh following the guidelines, but it fails with the errors below.

    `WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/tensorflow_core/python/ops/rnn_cell_impl.py:575: calling Zeros.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor WARNING:tensorflow:From /root/miniconda3/envs/rl4rs/lib/python3.6/site-packages/deepctr/contrib/rnn.py:257: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.keras.backend.get_session is deprecated. Please use tf.compat.v1.keras.backend.get_session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/nets/dien.py:43: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:124: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:125: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

    WARNING:tensorflow:From /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/base.py:129: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

    /mnt/rl4rs_pro/RL4RS/RL4RS/script/rl4rs/env/slate.py:279: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray complete_states = np.array(samples.get_complete_states()) run_batch_rl.sh: line 82: 180 Killed python -u batchrl_train.py $algo 'dataset_generate' "{'env':'SlateRecEnv-v0','iteminfo_file':'${rl4rs_dataset_dir}/item_info.csv','sample_file':'${rl4rs_dataset_dir}/rl4rs_dataset_a_shuf.csv','model_file':'${rl4rs_output_dir}/simulator_a_dien/model','trial_name':'a_all'}"`

    At first these seem to be only warnings about the TensorFlow version. My version is 1.15.0, and I checked that the environment file also requires 1.15.0. I tried other versions such as 1.14.0 and 2.0.0 but still failed. Since these are only warnings rather than errors, I don't know whether I really have to use another version. The other problem is that the run is finally killed and aborted.

    opened by Heth0531 2
Releases (v1.1.0)
Source code for Zalo AI 2021 submission

zalo_ltr_2021 Source code for Zalo AI 2021 submission Solution: Pipeline We use the pipepline in the picture below: Our pipeline is combination of BM2

128 Dec 27, 2022
Official Implementation of SWAGAN: A Style-based Wavelet-driven Generative Model

Official Implementation of SWAGAN: A Style-based Wavelet-driven Generative Model SWAGAN: A Style-based Wavelet-driven Generative Model Rinon Gal, Dana

55 Dec 06, 2022
This repository contains the DendroMap implementation for scalable and interactive exploration of image datasets in machine learning.

DendroMap DendroMap is an interactive tool to explore large-scale image datasets used for machine learning. A deep understanding of your data can be v

DIV Lab 33 Dec 30, 2022
Attack on Confidence Estimation algorithm from the paper "Disrupting Deep Uncertainty Estimation Without Harming Accuracy"

Attack on Confidence Estimation (ACE) This repository is the official implementation of "Disrupting Deep Uncertainty Estimation Without Harming Accura

3 Mar 30, 2022
Automatically creates genre collections for your Plex media

Plex Auto Genres Plex Auto Genres is a simple script that will add genre collection tags to your media making it much easier to search for genre speci

Shane Israel 63 Dec 31, 2022
Python scripts for performing stereo depth estimation using the HITNET Tensorflow model.

HITNET-Stereo-Depth-estimation Python scripts for performing stereo depth estimation using the HITNET Tensorflow model from Google Research. Stereo de

Ibai Gorordo 76 Jan 02, 2023
OneShot Learning-based hotword detection.

EfficientWord-Net Hotword detection based on one-shot learning Home assistants require special phrases called hotwords to get activated (eg:"ok google

ANT-BRaiN 102 Dec 25, 2022
[ICML 2020] "When Does Self-Supervision Help Graph Convolutional Networks?" by Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen

When Does Self-Supervision Help Graph Convolutional Networks? PyTorch implementation for When Does Self-Supervision Help Graph Convolutional Networks?

Shen Lab at Texas A&M University 106 Nov 11, 2022
2D Human Pose estimation using transformers. Implementation in Pytorch

PE-former: Pose Estimation Transformer Vision transformer architectures perform very well for image classification tasks. Efforts to solve more challe

Panteleris Paschalis 23 Oct 17, 2022
Dynamical movement primitives (DMPs), probabilistic movement primitives (ProMPs), spatially coupled bimanual DMPs.

Movement Primitives Movement primitives are a common group of policy representations in robotics. There are many different types and variations. This

DFKI Robotics Innovation Center 63 Jan 06, 2023
GE2340 project source code without credentials.

GE2340-Project-Public GE2340 project source code without credentials. Run the bot.py to start the bot Telegram: @jasperwong_ge2340_bot If the bot does

0 Feb 10, 2022
Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention

E(n)-Equivariant Transformer (wip) Implementation of E(n)-Equivariant Transformer, which extends the ideas from Welling's E(n)-Equivariant G

Phil Wang 132 Jan 02, 2023
Tensorflow Implementation of the paper "Spectral Normalization for Generative Adversarial Networks" (ICML 2017 workshop)

tf-SNDCGAN Tensorflow implementation of the paper "Spectral Normalization for Generative Adversarial Networks" (https://www.researchgate.net/publicati

Nhat M. Nguyen 248 Nov 25, 2022
Code for Multiple Instance Active Learning for Object Detection, CVPR 2021

Language: 简体中文 | English Introduction This is the code for Multiple Instance Active Learning for Object Detection, CVPR 2021. Installation A Linux pla

Tianning Yuan 269 Dec 21, 2022
Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://arxiv.org/abs/2103.06332).

Hurdles to Progress in Long-form Question Answering This repository contains the official scripts and datasets accompanying our NAACL 2021 paper, "Hur

Kalpesh Krishna 41 Nov 08, 2022
Python Single Object Tracking Evaluation

pysot-toolkit The purpose of this repo is to provide evaluation API of Current Single Object Tracking Dataset, including VOT2016 VOT2018 VOT2018-LT OT

348 Dec 22, 2022
Codes for "CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation"

CSDI This is the github repository for the NeurIPS 2021 paper "CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation

106 Jan 04, 2023
The Official Implementation of the ICCV-2021 Paper: Semantically Coherent Out-of-Distribution Detection.

SCOOD-UDG (ICCV 2021) This repository is the official implementation of the paper: Semantically Coherent Out-of-Distribution Detection Jingkang Yang,

Jake YANG 62 Nov 21, 2022
Wikidated : An Evolving Knowledge Graph Dataset of Wikidata’s Revision History

Wikidated Wikidated 1.0 is a dataset of Wikidata’s full revision history, which encodes changes between Wikidata revisions as sets of deletions and ad

Lukas Schmelzeisen 11 Aug 16, 2022
A Python script that merges WeChat and Alipay bills and saves them to an Excel file for automatic bookkeeping, with visual charts.

KeepAccounts_v2.0 KeepAccounts.exe and its companion spreadsheet read and merge bills officially exported from WeChat and Alipay, tag each transaction with a type, and generate visual charts by month and type. No more recording every purchase by hand; ten minutes a month keeps all your accounts in order. Author: MickLife Bilibili: https://spac

159 Jan 01, 2023