
Transfer-Learning-in-Reinforcement-Learning

Transfer Reinforcement Learning for Differing Action Spaces via Q-Network Representations

Final Report

Transfer Reinforcement Learning for Differing Action Spaces via Q-Network Representations

Cite this work

@misc{beck2022transfer,
    title={Transfer Reinforcement Learning for Differing Action Spaces via Q-Network Representations},
    author={Nathan Beck and Abhiramon Rajasekharan and Hieu Tran},
    year={2022},
    eprint={2202.02442},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Project description

Transfer learning approaches in reinforcement learning aim to assist agents in learning their target domains by leveraging the knowledge learned from other agents that have been trained on similar source domains. Recent research in this space has focused on knowledge transfer between tasks that have different transition dynamics and reward functions; however, little attention has been paid to knowledge transfer between tasks that have different action spaces.

In this paper, we approach the task of transfer learning between domains that differ in action spaces. We present a reward shaping method based on source embedding similarity that is applicable to domains with both discrete and continuous action spaces. The efficacy of our approach is evaluated on transfer to restricted action spaces in the Acrobot-v1 and Pendulum-v0 domains (Brockman et al. 2016).
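The reward-shaping idea rests on state embeddings taken from a pretrained source Q-network. Below is a minimal, hypothetical sketch of that general idea, not the exact method from the paper; SourceQNetwork, shaped_reward, and beta are illustrative names and values.

# Hypothetical sketch: shape the target-task reward with a bonus derived from a
# pretrained source Q-network's hidden representation of the state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SourceQNetwork(nn.Module):
    """Toy Q-network whose penultimate layer serves as a state embedding."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_actions)

    def embed(self, obs):
        # Penultimate-layer activations used as the state representation.
        return self.body(obs)

    def forward(self, obs):
        return self.head(self.body(obs))

def shaped_reward(env_reward, obs, next_obs, source_q, beta=0.1):
    """Add a similarity-based shaping bonus to the environment reward.

    The bonus rewards transitions whose successor-state embedding stays close
    to the source network's embedding of the current state (illustrative only).
    """
    with torch.no_grad():
        e_obs = source_q.embed(torch.as_tensor(obs, dtype=torch.float32))
        e_next = source_q.embed(torch.as_tensor(next_obs, dtype=torch.float32))
        bonus = F.cosine_similarity(e_obs, e_next, dim=-1).item()
    return env_reward + beta * bonus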

Our presentations

  • Presentation 1 here
  • Google Doc Folder here

Our Google Colab

https://colab.research.google.com/drive/1cQCV9Ko-prpB8sH6FlB4oj781On-ut_w?usp=sharing

Setup

  1. Clone our repository
  2. Install Gym

Using pip:

pip install gym

Or build from source:

git clone https://github.com/openai/gym
cd gym
pip install -e .
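After installing, a quick sanity check can confirm that the two domains used in the report are available. This assumes the Gym version used by the project, which still registers Pendulum-v0 (newer Gym releases renamed it Pendulum-v1).

# Instantiate the two domains used in the report and print their spaces.
import gym

for env_id in ("Acrobot-v1", "Pendulum-v0"):
    env = gym.make(env_id)
    env.reset()
    print(env_id, "observation space:", env.observation_space,
          "action space:", env.action_space)
    env.close()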

How to run?

Run with a Python IDE

  1. Open main.py or main_multiple_run.py
  2. Modify env_name and the algorithm that you want to run
  3. Modify the parameters in the transfer_execute function if needed
  4. Logs will be printed to the terminal, and the resulting plots will be shown in new windows.

Run with Google Colab

Follow our sample notebook Reward_Shaping_TL.ipynb to run your own Colab.

Implemented Algorithms in Stable-Baselines3

| Name          | Recurrent | Box | Discrete | MultiDiscrete | MultiBinary | Multi Processing |
| ------------- | --------- | --- | -------- | ------------- | ----------- | ---------------- |
| A2C           |           | ✔️  | ✔️       | ✔️            | ✔️          | ✔️               |
| DDPG          |           | ✔️  |          |               |             |                  |
| DQN           |           |     | ✔️       |               |             |                  |
| HER           |           | ✔️  | ✔️       |               |             |                  |
| PPO           |           | ✔️  | ✔️       | ✔️            | ✔️          | ✔️               |
| SAC           |           | ✔️  |          |               |             |                  |
| TD3           |           | ✔️  |          |               |             |                  |
| QR-DQN¹       |           |     | ✔️       |               |             |                  |
| TQC¹          |           | ✔️  |          |               |             |                  |
| Maskable PPO¹ |           |     | ✔️       | ✔️            | ✔️          | ✔️               |

¹: Implemented in the SB3 Contrib GitHub repository.
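For reference, here is a minimal example of picking an algorithm from the table that matches a domain's action space. The timestep budgets are illustrative, and it assumes stable-baselines3 plus a Gym version that still provides Pendulum-v0.

import gym
from stable_baselines3 import DQN, SAC

# Acrobot-v1 has a Discrete action space, so DQN applies.
dqn = DQN("MlpPolicy", gym.make("Acrobot-v1"), verbose=0)
dqn.learn(total_timesteps=10_000)  # small budget, illustration only

# Pendulum-v0 has a Box (continuous) action space, so SAC applies.
sac = SAC("MlpPolicy", gym.make("Pendulum-v0"), verbose=0)
sac.learn(total_timesteps=10_000)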

Action spaces (gym.spaces):

  • Box: An N-dimensional box that contains every point in the action space.
  • Discrete: A list of possible actions, where only one action can be used per timestep.
  • MultiDiscrete: A list of possible actions, where only one action from each discrete set can be used per timestep.
  • MultiBinary: A list of possible actions, where any combination of actions can be used at each timestep.
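A small illustration of these four space types, constructed directly with gym.spaces; the shapes and sizes below are arbitrary examples.

from gym import spaces

box = spaces.Box(low=-2.0, high=2.0, shape=(1,))  # e.g. Pendulum's continuous torque
discrete = spaces.Discrete(3)                     # e.g. Acrobot's three torque choices
multi_discrete = spaces.MultiDiscrete([3, 2])     # one choice from each discrete set
multi_binary = spaces.MultiBinary(4)              # any combination of 4 on/off actions

# .sample() draws a random valid action from each space.
for space in (box, discrete, multi_discrete, multi_binary):
    print(type(space).__name__, space.sample())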

References

  1. OpenAI Gym repo
  2. OpenAI Gym website
  3. Stable Baselines 3 repo
  4. Roboschool repo
  5. Gym extension repos - a Python package extending OpenAI Gym for auxiliary tasks (multitask learning, transfer learning, inverse reinforcement learning, etc.)
  6. Example code of TL in DL repo
  7. Retro Contest - a transfer learning contest that measures a reinforcement learning algorithm’s ability to generalize from previous experience (hosted by OpenAI) link
  8. Rainbow: Combining Improvements in Deep Reinforcement Learning (repo), (paper)
  9. Experience replay (link)
  10. Solving RL classic control (link)

Related papers

  1. Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation (paper), (repo)
  2. Deep Transfer Reinforcement Learning for Text Summarization (paper),(repo)
  3. Using Transfer Learning Between Games to Improve Deep Reinforcement Learning Performance and Stability (paper), (poster)
  4. Multi-Source Policy Aggregation for Transfer Reinforcement Learning between Diverse Environmental Dynamics (IJCAI 2020) (paper), (repo)
  5. Deep Reinforcement Learning and Transfer Learning with Flappy Bird (paper), (poster)
  6. Decoupling Dynamics and Reward for Transfer Learning (paper), (repo)
  7. Progressive Neural Networks (paper)
  8. Deep Learning for Video Game Playing (paper)
  9. Disentangled Skill Embeddings for Reinforcement Learning (paper)
  10. Playing Atari with Deep Reinforcement Learning (paper)
  11. Dueling Network Architectures for Deep Reinforcement Learning (paper)
  12. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning (paper)
  13. DDPG (link)

Contributors

  1. Nathan Beck nathan.beck@utdallas.edu
  2. Abhiramon Rajasekharan abhiramon.rajasekharan@utdallas.edu
  3. Trung Hieu Tran trunghieu.tran@utdallas.edu
