Manipulation OpenAI Gym environments for simulating robots at the STARS lab

Manipulator Learning

Overview

This repository contains a set of manipulation environments that are compatible with OpenAI Gym and simulated in pybullet. In particular, we have a set of environments with a simulated version of our lab's mobile manipulator, the Thing, containing a UR10 mounted on a Ridgeback base, as well as a set of environments using a table-mounted Franka Emika Panda.

The package currently contains variations of the following tasks:

  • Reach
  • Lift
  • Stack
  • Pick and Place
  • Sort
  • Insert
  • Pick and Insert
  • Door Open
  • Play (multitask)

Requirements

  • python (3.7+)
  • pybullet
  • numpy
  • gym
  • transforms3d
  • Pillow (for rendering)
  • liegroups

Installation

git clone https://github.com/utiasSTARS/manipulator-learning
cd manipulator-learning && pip install .

Usage

The easiest way to use the environments in this repository is to import the whole envs module and then initialize using getattr. For example, to load our Panda Play environment with the insertion tray:

import manipulator_learning.sim.envs as manlearn_envs
env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')()

obs = env.reset()
next_obs, rew, done, info = env.step(env.action_space.sample())
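For a complete episode, you can wrap this in the usual Gym rollout loop. A minimal sketch, assuming the pre-0.26 Gym API shown above (reset returns only the observation, step returns a 4-tuple) and that the environment sets done on its own:

import manipulator_learning.sim.envs as manlearn_envs

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')()
obs = env.reset()
done = False
ep_return = 0.0
while not done:
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, rew, done, info = env.step(action)
    ep_return += rew
print('episode return:', ep_return)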

You can also easily initialize the environment with a wide variety of different keyword arguments, e.g.:

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')(main_task='stack_01')

Image environments

All environments that are suffixed with Image or Multiview produce observations that contain RGB and depth images as well as numerical proprioceptive data. Here is an example of how you can access each type of data in these environments:

obs = env.reset()
img = obs['img']
depth = obs['depth']
proprioceptive = obs['obs']
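Since Pillow is already listed as a requirement for rendering, a convenient way to inspect these observations is to write them to disk. A minimal sketch, assuming obs['img'] is an HxWx3 uint8 RGB array (an assumption, not verified against the environment code):

import numpy as np
from PIL import Image

obs = env.reset()
rgb = np.asarray(obs['img'], dtype=np.uint8)  # assumed HxWx3 uint8 RGB
Image.fromarray(rgb).save('obs_rgb.png')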

By default, all image-based environments render headlessly using EGL, but if you want to render the full pybullet GUI, you can use the render_opengl_gui and egl flags like this:

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')(render_opengl_gui=True, egl=False)

Environment Details

Thing (mobile manipulator) environments

Our mobile manipulation environments were primarily designed to allow base position changes between task episodes, but don't actually allow base movement during an episode. For this reason, many of the included environments come in both an Image version and a Multiview version, where all observation and control parameters are identical, except that the base is fixed in the Image version and moves (between episodes) in the Multiview version. See, for example, manipulator_learning/sim/envs/thing_door.py.
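One quick way to check this equivalence yourself is to instantiate both variants of a task and compare a reset observation. A sketch, assuming both environments construct with default arguments and return the dict observations described above (hypothetical usage, not from the repository docs):

import manipulator_learning.sim.envs as manlearn_envs

img_env = getattr(manlearn_envs, 'ThingDoorImage')()
mv_env = getattr(manlearn_envs, 'ThingDoorMultiview')()

# Observation layouts should match; only base repositioning
# between episodes differs.
img_obs, mv_obs = img_env.reset(), mv_env.reset()
print(img_obs['img'].shape, mv_obs['img'].shape)
print(img_obs['obs'].shape, mv_obs['obs'].shape)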

Panda Environments

Our Panda environments contain several of the same tasks as our Thing environments. Additionally, we have a set of multi-task "play" environments.

Current environment list

['PandaPlayXYZState', 
'PandaPlayInsertTrayXYZState', 
'PandaPlayInsertTrayDPGripXYZState', 
'PandaPlayInsertTrayPlusPickPlaceXYZState', 
'PandaLiftXYZState', 
'PandaBringXYZState', 
'PandaPickAndPlaceAirGoal6DofState', 
'PandaReachXYZState', 
'PandaStackXYZState',
'ThingInsertImage', 
'ThingInsertMultiview', 
'ThingPickAndInsertSucDoneImage', 
'ThingPickAndInsertSucDoneMultiview',
'ThingPickAndPlaceXYState', 
'ThingPickAndPlacePrevPosXYState', 
'ThingPickAndPlaceGripPosXYState', 
'ThingPickAndPlaceXYZState', 
'ThingPickAndPlaceGripPosXYZState', 
'ThingPickAndPlaceAirGoalXYZState', 
'ThingPickAndPlace6DofState', 
'ThingPickAndPlace6DofLongState', 
'ThingPickAndPlace6DofSmallState', 
'ThingPickAndPlaceAirGoal6DofState', 
'ThingBringXYZState',
'ThingLiftXYZStateMultiview',
'ThingLiftXYZState', 
'ThingLiftXYZMultiview', 
'ThingLiftXYZImage', 
'ThingPickAndPlace6DofSmallImage', 
'ThingPickAndPlace6DofSmall160120Image', 
'ThingPickAndPlace6DofSmallMultiview', 
'ThingSort2Multiview', 
'ThingSort3Multiview', 
'ThingPushingXYState', 
'ThingPushingXYImage', 
'ThingPushing6DofMultiview', 
'ThingReachingXYState', 
'ThingReachingXYImage', 
'ThingStackImage', 
'ThingStackMultiview', 
'ThingStackSmallMultiview', 
'ThingStackSameMultiview', 
'ThingStackSameMultiviewV2', 
'ThingStackSameImageV2', 
'ThingStack3Multiview', 
'ThingStackTallMultiview', 
'ThingDoorImage', 
'ThingDoorMultiview']

Roadmap

  • Make environment generation compatible with gym.make
  • Documentation for environments and options for customization
  • Add imitation learning/data collection code
  • Fix bug where the timesteps-remaining display on the rendered window takes an extra step to update