Manipulation OpenAI Gym environments for simulating robots at the STARS lab

Overview

Manipulator Learning

This repository contains a set of manipulation environments that are compatible with OpenAI Gym and simulated in pybullet. In particular, we have a set of environments built around a simulated version of our lab's mobile manipulator, the Thing, which consists of a UR10 arm mounted on a Ridgeback base, as well as a set of environments using a table-mounted Franka Emika Panda.

The package currently contains variations of the following tasks:

  • Reach
  • Lift
  • Stack
  • Pick and Place
  • Sort
  • Insert
  • Pick and Insert
  • Door Open
  • Play (multitask)

Requirements

  • python (3.7+)
  • pybullet
  • numpy
  • gym
  • transforms3d
  • Pillow (for rendering)
  • liegroups

Installation

git clone https://github.com/utiasSTARS/manipulator-learning
cd manipulator-learning && pip install .

Usage

The easiest way to use environments in this repository is to import the whole envs module and then initialize using getattr. For example, to load our Panda Play environment with the insertion tray:

import manipulator_learning.sim.envs as manlearn_envs

# environment classes are exposed as attributes of the envs module
env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')()

obs = env.reset()
# step returns the standard Gym 4-tuple: observation, reward, done, info
next_obs, rew, done, info = env.step(env.action_space.sample())
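
Putting these pieces together, a full episode rollout looks like the following. This is a minimal sketch using random actions; any policy that maps observations to actions can replace env.action_space.sample(), and termination behavior depends on the specific environment:

import manipulator_learning.sim.envs as manlearn_envs

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')()
obs = env.reset()
done = False
while not done:
    # random exploration; substitute a learned policy here
    obs, rew, done, info = env.step(env.action_space.sample())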

You can also initialize the environment with a wide variety of keyword arguments, e.g.:

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')(main_task='stack_01')

Image environments

All environments that are suffixed with Image or Multiview produce observations that contain RGB and depth images as well as numerical proprioceptive data. Here is an example of how you can access each type of data in these environments:

obs = env.reset()
img = obs['img']              # RGB image
depth = obs['depth']          # depth image
proprioceptive = obs['obs']   # numerical proprioceptive data
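
Since Pillow is already a requirement, you can dump the RGB observation to disk for quick inspection. This is a minimal sketch that assumes obs['img'] converts to an HxWx3 uint8 array; the exact array type is an assumption here:

import numpy as np
from PIL import Image

obs = env.reset()
# assumption: obs['img'] is (convertible to) an HxWx3 uint8 RGB array
Image.fromarray(np.asarray(obs['img'], dtype=np.uint8)).save('frame.png')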

By default, all image-based environments render headlessly using EGL. If you want to render the full pybullet GUI instead, you can use the render_opengl_gui and egl flags like this:

env = getattr(manlearn_envs, 'PandaPlayInsertTrayXYZState')(render_opengl_gui=True, egl=False)

Environment Details

Thing (mobile manipulator) environments

Our mobile manipulation environments were primarily designed to allow the base position to change between task episodes; the base does not move during an episode. For this reason, many tasks come in both an Image version and a Multiview version, with identical observation and control parameters, except that the base is fixed in the Image version and moves (between episodes) in the Multiview version. See, for example, manipulator_learning/sim/envs/thing_door.py.
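
A minimal sketch of the practical difference, assuming ThingDoorMultiview exposes the same observation keys described above (an assumption for illustration):

import manipulator_learning.sim.envs as manlearn_envs

# Multiview: the base pose is re-sampled at reset, so consecutive
# episodes render the scene from different viewpoints
env = getattr(manlearn_envs, 'ThingDoorMultiview')()
img_episode_1 = env.reset()['img']
img_episode_2 = env.reset()['img']
# img_episode_1 and img_episode_2 are generally rendered from different
# base (and therefore camera) poses; in ThingDoorImage they would not be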

Panda environments

Our Panda environments contain several of the same tasks as our Thing environments. Additionally, we have a set of "play" environments that are multi-task.

Current environment list

['PandaPlayXYZState', 
'PandaPlayInsertTrayXYZState', 
'PandaPlayInsertTrayDPGripXYZState', 
'PandaPlayInsertTrayPlusPickPlaceXYZState', 
'PandaLiftXYZState', 
'PandaBringXYZState', 
'PandaPickAndPlaceAirGoal6DofState', 
'PandaReachXYZState', 
'PandaStackXYZState',
'ThingInsertImage', 
'ThingInsertMultiview', 
'ThingPickAndInsertSucDoneImage', 
'ThingPickAndInsertSucDoneMultiview',
'ThingPickAndPlaceXYState', 
'ThingPickAndPlacePrevPosXYState', 
'ThingPickAndPlaceGripPosXYState', 
'ThingPickAndPlaceXYZState', 
'ThingPickAndPlaceGripPosXYZState', 
'ThingPickAndPlaceAirGoalXYZState', 
'ThingPickAndPlace6DofState', 
'ThingPickAndPlace6DofLongState', 
'ThingPickAndPlace6DofSmallState', 
'ThingPickAndPlaceAirGoal6DofState', 
'ThingBringXYZState',
'ThingLiftXYZStateMultiview',
'ThingLiftXYZState', 
'ThingLiftXYZMultiview', 
'ThingLiftXYZImage', 
'ThingPickAndPlace6DofSmallImage', 
'ThingPickAndPlace6DofSmall160120Image', 
'ThingPickAndPlace6DofSmallMultiview', 
'ThingSort2Multiview', 
'ThingSort3Multiview', 
'ThingPushingXYState', 
'ThingPushingXYImage', 
'ThingPushing6DofMultiview', 
'ThingReachingXYState', 
'ThingReachingXYImage', 
'ThingStackImage', 
'ThingStackMultiview', 
'ThingStackSmallMultiview', 
'ThingStackSameMultiview', 
'ThingStackSameMultiviewV2', 
'ThingStackSameImageV2', 
'ThingStack3Multiview', 
'ThingStackTallMultiview', 
'ThingDoorImage', 
'ThingDoorMultiview']

Roadmap

  • Make environment generation compatible with gym.make
  • Documentation for environments and options for customization
  • Add imitation learning/data collection code
  • Fix bug where the timesteps-remaining display on the rendered window takes an extra step to update