[NeurIPS 2021] PyTorch Code for Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives

Overview

Robot Action Primitives (RAPS)

This repository is the official implementation of Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives (RAPS).

[Project Website]

Murtaza Dalal, Deepak Pathak*, Ruslan Salakhutdinov*
(* equal advising)

CMU


If you find this work useful in your research, please cite:

@inproceedings{dalal2021raps,
    Author = {Dalal, Murtaza and Pathak, Deepak and
              Salakhutdinov, Ruslan},
    Title = {Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives},
    Booktitle = {NeurIPS},
    Year = {2021}
}

Requirements

To install dependencies, please run the following commands:

sudo apt-get update
sudo apt-get install curl \
    git \
    libgl1-mesa-dev \
    libgl1-mesa-glx \
    libglew-dev \
    libosmesa6-dev \
    software-properties-common \
    net-tools \
    unzip \
    vim \
    virtualenv \
    wget \
    xpra \
    xserver-xorg-dev
sudo apt-get install libglfw3-dev libgles2-mesa-dev patchelf
sudo mkdir /usr/lib/nvidia-000  # empty directory; a common mujoco-py workaround for GPU rendering, added to LD_LIBRARY_PATH below

Please add the following lines to your ~/.bashrc:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco200/bin
export MUJOCO_GL='egl'
export MKL_THREADING_LAYER=GNU
export D4RL_SUPPRESS_IMPORT_ERROR='1'
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia-000
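
After editing, reload the file so the new variables take effect in your current shell:

source ~/.bashrc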

To install the Python requirements, run:

conda create -n raps python=3.7
conda activate raps
./setup_python_env.sh <absolute path to raps>
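
As a quick sanity check (assuming the setup script installs mujoco-py, which the mujoco200 paths above suggest), verify that the simulator bindings import cleanly:

python -c "import mujoco_py; print(mujoco_py.__file__)"  # should print the install path without raising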

Training and Evaluation

Kitchen

Prior to running any experiments, make sure to run cd /path/to/raps/rlkit

single task env names:

  • microwave
  • kettle
  • slide_cabinet
  • hinge_cabinet
  • light_switch
  • top_left_burner

multi task env names:

  • microwave_kettle_light_top_left_burner //Sequential Multi Task 1
  • hinge_slide_bottom_left_burner_light //Sequential Multi Task 2

To train RAPS with Dreamer on any single task kitchen environment, run:

python experiments/kitchen/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>
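
For example, a minimal shell loop that sweeps this command over all six single task environments listed above (raps_dreamer_${env} is just an illustrative exp_prefix; the runs execute sequentially):

for env in microwave kettle slide_cabinet hinge_cabinet light_switch top_left_burner; do
    python experiments/kitchen/dreamer/dreamer_v2_single_task_primitives.py \
        --mode here_no_doodad --exp_prefix raps_dreamer_${env} --env ${env}
done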

To train RAPS with Dreamer on the multi task kitchen environments, run:

python experiments/kitchen/dreamer/dreamer_v2_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train Raw Actions with Dreamer on any kitchen environment, run:

python experiments/kitchen/dreamer/dreamer_v2_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with RAD on any single task kitchen environment, run:

python experiments/kitchen/rad/rad_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with RAD on any multi task kitchen environment, run:

python experiments/kitchen/rad/rad_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train Raw Actions with RAD on any kitchen environment, run:

python experiments/kitchen/rad/rad_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with PPO on any single task kitchen environment, run:

python experiments/kitchen/ppo/ppo_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train RAPS with PPO on any multi task kitchen environment, run:

python experiments/kitchen/ppo/ppo_multi_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>

To train Raw Actions with PPO on any kitchen environment, run:

python experiments/kitchen/ppo/ppo_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

Metaworld

single task env names:

  • drawer-close-v2
  • soccer-v2
  • peg-unplug-side-v2
  • sweep-into-v2
  • assembly-v2
  • disassemble-v2

To train RAPS with Dreamer on any Metaworld environment, run:

python experiments/metaworld/dreamer/dreamer_v2_single_task_primitives.py --mode here_no_doodad --exp_prefix <> --env <env name>
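
Similarly, a minimal loop over all six Metaworld tasks (raps_mw_${env} is again an illustrative exp_prefix):

for env in drawer-close-v2 soccer-v2 peg-unplug-side-v2 sweep-into-v2 assembly-v2 disassemble-v2; do
    python experiments/metaworld/dreamer/dreamer_v2_single_task_primitives.py \
        --mode here_no_doodad --exp_prefix raps_mw_${env} --env ${env}
done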

To train Raw Actions with Dreamer on any Metaworld environment, run:

python experiments/metaworld/dreamer/dreamer_v2_single_task_raw_actions.py --mode here_no_doodad --exp_prefix <> --env <env name>

Robosuite

To train RAPS with Dreamer on the Robosuite Lift task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_primitives_lift.py --mode here_no_doodad --exp_prefix <>

To train Raw Actions with Dreamer on the Robosuite Lift task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_raw_actions_lift.py --mode here_no_doodad --exp_prefix <>

To train RAPS with Dreamer on the Robosuite Door task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_primitives_door.py --mode here_no_doodad --exp_prefix <>

To train Raw Actions with Dreamer on the Robosuite Door task, run:

python experiments/robosuite/dreamer/dreamer_v2_single_task_raw_actions_door.py --mode here_no_doodad --exp_prefix <>

Learning Curve Visualization

cd /path/to/raps/rlkit
python ../viskit/viskit/frontend.py data/<exp_prefix>  # open localhost:5000 to view
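
If port 5000 is already taken, viskit frontends derived from rllab typically accept a --port flag (an assumption about this particular fork; check viskit/frontend.py if it errors):

python ../viskit/viskit/frontend.py data/<exp_prefix> --port 5001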