PyTorch Code for "Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning"

Overview

Generalization in Dexterous Manipulation via
Geometry-Aware Multi-Task Learning

[Project Page] [Paper]

Wenlong Huang¹, Igor Mordatch², Pieter Abbeel¹, Deepak Pathak³

¹University of California, Berkeley, ²Google Brain, ³Carnegie Mellon University

This is a PyTorch implementation of our Geometry-Aware Multi-Task Policy. The codebase also includes a suite of dexterous manipulation environments with 114 diverse real-world objects built upon Gym and MuJoCo.

We show that a single generalist policy can perform in-hand manipulation of over 100 geometrically-diverse real-world objects and generalize to new objects with unseen shape or size. Interestingly, we find that multi-task learning with object point cloud representations not only generalizes better but even outperforms the single-object specialist policies on both training as well as held-out test objects.

If you find this work useful in your research, please cite using the following BibTeX:

@article{huang2021geometry,
  title={Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning},
  author={Huang, Wenlong and Mordatch, Igor and Abbeel, Pieter and Pathak, Deepak},
  journal={arXiv preprint arXiv:2111.03062},
  year={2021}
}

Setup

Requirements

Setup Instructions

git clone https://github.com/huangwl18/geometry-dex.git
cd geometry-dex/
conda create --name geometry-dex-env python=3.6.9
conda activate geometry-dex-env
pip install --upgrade pip
pip install -r requirements.txt
bash install-baselines.sh
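
After installation, you may wish to verify that the core dependencies import correctly. This is an illustrative check only; torch and gym are installed from requirements.txt, while mujoco_py is assumed here to be the MuJoCo binding the environments rely on:

python -c "import torch, gym, mujoco_py; print(torch.__version__, gym.__version__)"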

Running Code

Below are some flags and parameters for run_ddpg.py that you may find useful; an illustrative command combining several of them follows the list:

Flag / Parameter: Description
--expID <INT>: Experiment ID
--train_names <List of STRING>: List of environments for training; separated by spaces
--test_names <List of STRING>: List of environments for zero-shot testing; separated by spaces
--point_cloud: Use the geometry-aware policy
--pointnet_load_path <INT>: Experiment ID from which to load the pre-trained PointNet; required for --point_cloud
--video_count <INT>: Number of videos to generate for each env per cycle; only up to 1 is currently supported; 0 to disable
--n_test_rollouts <INT>: Total number of collected rollouts across all train + test envs for each evaluation run; should be a multiple of len(train_names) + len(test_names)
--num_rollouts <INT>: Total number of collected rollouts across all train envs for 1 training cycle; should be a multiple of len(train_names)
--num_parallel_envs <INT>: Number of parallel envs to create for vec_env; should be a multiple of len(train_names)
--chunk_size <INT>: Number of parallel envs assigned to each worker in SubprocChunkVecEnv; 0 to disable and use SubprocVecEnv
--num_layers <INT>: Number of layers in the MLP for all policies
--width <INT>: Width of each layer in the MLP for all policies
--seed <INT>: Seed for Gym, PyTorch, and NumPy
--eval: Perform only evaluation using the latest checkpoint
--load_path <INT>: Experiment ID from which to load the checkpoint for DDPG; required for --eval
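
For example, the following illustrative command (hypothetical experiment ID; the object names are taken from the Provided Environments lists below) trains on three objects and tests zero-shot on two others, with rollout and parallel-env counts chosen to satisfy the "multiple of" constraints above:

python run_ddpg.py --expID 0 --video_count 0 --train_names knife apple mug --test_names pear fork --num_rollouts 3 --n_test_rollouts 5 --num_parallel_envs 3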

The code also uses WandB for logging. You may wish to run wandb login in the terminal to record runs to your account, or choose to run anonymously.
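
For example (a sketch rather than commands from this repository: wandb login is the standard W&B CLI login command, and WANDB_MODE=offline is a standard W&B environment variable that logs runs locally without an account, offered here as an alternative to anonymous mode):

wandb login
WANDB_MODE=offline python run_ddpg.py --expID <INT> --video_count 0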

WARNING: Due to the large number of total environments, generating videos during training can be slow and memory-intensive. You may wish to train the policy without generating videos by passing --video_count 0. After training completes, simply run run_ddpg.py with the flags --eval and --video_count 1 to visualize the policy. See the example below.

Training

To train Vanilla Multi-Task DDPG policy:

python run_ddpg.py --expID 1 --video_count 0 --n_cycles 40000 --chunk 10

To train Geometry-Aware Multi-Task DDPG policy, first pretrain PointNet encoder:

python train_pointnet.py --expID 2

Then train the policy:

python run_ddpg.py --expID 3 --video_count 0 --n_cycles 40000 --chunk 10 --point_cloud --pointnet_load_path 2 --no_save_buffer

Note that we do not save the replay buffer here because saving it is slow, as the buffer contains sampled point clouds. If you wish to resume training later, do not pass --no_save_buffer above.

Evaluation / Visualization

To evaluate a trained policy and generate video visualizations, run the same command used to train the policy, but add the flags --eval --video_count=<VIDEO_COUNT> --load_path=<LOAD_EXPID>. Replace <VIDEO_COUNT> with 1 to enable visualization and 0 otherwise, and replace <LOAD_EXPID> with the Experiment ID of the trained policy. For a Geometry-Aware Multi-Task DDPG policy trained with the command above, run the following for evaluation and visualization:

python run_ddpg.py --expID 4 --video_count 1 --n_cycles 40000 --chunk 10 --point_cloud --pointnet_load_path 2 --no_save_buffer --eval --load_path 3

Trained Models

We will be releasing trained model files for our Geometry-Aware Policy and single-task oracle policies for each individual object. Stay tuned! Early access can be requested via email.

Provided Environments

Training Envs

e_toy_airplane

knife

flat_screwdriver

elephant

apple

scissors

i_cups

cup

foam_brick

pudding_box

wristwatch

padlock

power_drill

binoculars

b_lego_duplo

ps_controller

mouse

hammer

f_lego_duplo

piggy_bank

can

extra_large_clamp

peach

a_lego_duplo

racquetball

tuna_fish_can

a_cups

pan

strawberry

d_toy_airplane

wood_block

small_marker

sugar_box

ball

torus

i_toy_airplane

chain

j_cups

c_toy_airplane

airplane

nine_hole_peg_test

water_bottle

c_cups

medium_clamp

large_marker

h_cups

b_colored_wood_blocks

j_lego_duplo

f_toy_airplane

toothbrush

tennis_ball

mug

sponge

k_lego_duplo

phillips_screwdriver

f_cups

c_lego_duplo

d_marbles

d_cups

camera

d_lego_duplo

golf_ball

k_toy_airplane

b_cups

softball

wine_glass

chips_can

cube

master_chef_can

alarm_clock

gelatin_box

h_lego_duplo

baseball

light_bulb

banana

rubber_duck

headphones

i_lego_duplo

b_toy_airplane

pitcher_base

j_toy_airplane

g_lego_duplo

cracker_box

orange

e_cups

Test Envs

rubiks_cube

dice

bleach_cleanser

pear

e_lego_duplo

pyramid

stapler

flashlight

large_clamp

a_toy_airplane

tomato_soup_can

fork

cell_phone

m_lego_duplo

toothpaste

flute

stanford_bunny

a_marbles

potted_meat_can

timer

lemon

utah_teapot

train

g_cups

l_lego_duplo

bowl

door_knob

mustard_bottle

plum

Acknowledgement

The code is adapted from this open-source implementation of DDPG + HER. The object meshes are from the YCB Dataset and the ContactDB Dataset. We use SubprocChunkVecEnv from this pull request of OpenAI Baselines to speed up vectorized environments.
