Dynamic Environments with Deformable Objects (DEDO)

Overview


DEDO is a lightweight and customizable suite of environments with deformable objects. It is aimed at researchers in the machine learning, reinforcement learning, robotics, and computer vision communities. The suite provides a set of everyday tasks that involve deformables, such as hanging cloth, dressing a person, and buttoning buttons. We provide examples for integrating two popular reinforcement learning libraries: StableBaselines3 and RLlib. We also provide reference implementations for training various Variational Autoencoder variants with our environment. DEDO is easy to set up and has few dependencies; it is highly parallelizable and supports a wide range of customizations, such as loading custom objects and textures and adjusting material properties.


Note: updates for this repo are in progress (until the presentation at NeurIPS 2021 in mid-December).

If you use DEDO in your work, please cite:

@inproceedings{dedo2021,
  title={Dynamic Environments with Deformable Objects},
  author={Rika Antonova and Peiyang Shi and Hang Yin and Zehang Weng and Danica Kragic},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track},
  year={2021},
}

Table of Contents:
Installation
Getting Started
Tasks
Use with RL
Use with VAE
Customization

Please refer to the Wiki for the full documentation.

Installation

Optional initial step: create a new conda environment with conda create --name dedo python=3.7 and activate it with conda activate dedo. Conda is not strictly needed; alternatives like virtualenv can be used, and a direct install without a virtual environment works as well.

git clone https://github.com/contactrika/dedo
cd dedo
pip install numpy  # important: needed to compile PyBullet with NumPy support
pip install -e .

Python 3.7 is recommended: on some OS and CPU combinations we encountered cases where PyBullet could not be compiled with NumPy enabled under pip with Python 3.8. To enable recording/logging videos, install ffmpeg:

sudo apt-get install ffmpeg

See more in the Installation Guide in the wiki.

Getting started

To get started, run the following command to visualize a task through a hard-coded policy; vary --env to run other tasks. A minimal programmatic sketch follows the flag list below.

python -m dedo.demo --env=HangGarment-v1 --viz --debug
  • dedo.demo is the demo module
  • --env=HangGarment-v1 specifies the environment
  • --viz enables the GUI
  • --debug outputs additional information in the console
  • --cam_resolution 400 specifies the size of the output window
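
These environments follow the standard OpenAI Gym interface, so they can also be driven programmatically. Below is a minimal sketch, assuming the environments are registered when dedo is imported and expose the usual Gym calls; treat it as an illustration rather than the canonical API:

import gym
import dedo  # importing dedo registers the DEDO environments with gym

env = gym.make('HangGarment-v1')
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random actions; replace with a policy
    obs, reward, done, info = env.step(action)
    if done:
        break
env.close()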

See more in the Usage-guide in the wiki.

Tasks

See more in Task Overview

We provide a set of 10 tasks involving deformable objects; most tasks contain 5 hand-made deformable objects each. There are also two procedurally generated tasks, ButtonProc and HangProcCloth, in which the deformable objects are procedurally generated. Furthermore, to improve generalization, the v0 version of each task randomizes textures and meshes.

All tasks have -v1 and -v2 with a particular choice of meshes and textures that is not randomized. Most tasks have versions up to -v5 with additional mesh and texture variations.

Tasks with procedurally generated cloth (ButtonProc and HangProcCloth) generate random cloth objects for all versions (but randomize textures only in v0).

HangBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangBag-v1 --viz

HangBag-v0: selects one of 108 bag meshes; randomized textures

HangBag-v[1-3]: three bag versions with textures shown below:

images/imgs/hang_bags_annotated.jpg

HangGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangGarment-v1 --viz

HangGarment-v0: hang garment with randomized textures (a few examples below):

HangGarment-v[1-5]: 5 apron meshes and texture combos shown below:

images/imgs/hang_garments_5.jpg

HangGarment-v[6-10]: 5 shirt meshes and texture combos shown below:

images/imgs/hang_shirts_5.jpg

HangProcCloth

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=HangProcCloth-v1 --viz

HangProcCloth-v0: random textures; procedurally generated cloth with 1 or 2 holes.

HangProcCloth-v[1-2]: same, but with a fixed number of holes (1 or 2, respectively)

images/imgs/hang_proc_cloth.jpg

Buttoning

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Button-v1 --viz

ButtonProc-v0: randomized textures and procedurally generated cloth with 2 holes, randomized hole/button positions.

ButtonProc-v[1-2]: procedurally generated cloth with 1 or 2 holes, respectively.

images/imgs/button_proc.jpg

Button-v0: randomized textures, but fixed cloth and button positions.

Button-v1: fixed cloth and button positions with one texture (see image below):

images/imgs/button.jpg

Hoop

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Hoop-v1 --viz

Hoop-v0: randomized textures
Hoop-v1: pre-selected textures

images/imgs/hoop_and_lasso.jpg

Lasso

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=Lasso-v1 --viz

Lasso-v0: randomized textures
Lasso-v1: pre-selected textures

DressBag

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressBag-v1 --viz

DressBag-v0, DressBag-v[1-5]: demo for -v1 shown below

images/imgs/dress_bag.jpg

Visualizations of the 5 backpack mesh and texture variants for DressBag-v[1-5]:

images/imgs/backpack_meshes.jpg

DressGarment

images/gifs/HangGarment-v1.gif

python -m dedo.demo_preset --env=DressGarment-v1 --viz

DressGarment-v0, DressGarment-v[1-5]: demo for -v1 shown below

images/imgs/dress_garment.jpg

Mask

python -m dedo.demo_preset --env=Mask-v1 --viz

Mask-v0, Mask-v[1-5]: a few texture variants shown below:

images/imgs/dress_garment.jpg

RL Examples

dedo/run_rl_sb3.py gives an example of how to train an RL algorithm from Stable Baselines 3:

python -m dedo.run_rl_sb3 --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
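
As a programmatic alternative, the sketch below trains PPO from Stable Baselines3 directly on a DEDO environment. The choice of PPO, the MlpPolicy, and the timestep budget are illustrative assumptions, not the settings used by dedo/run_rl_sb3.py:

import gym
import dedo  # importing dedo registers the DEDO environments with gym
from stable_baselines3 import PPO

env = gym.make('HangGarment-v0')
model = PPO('MlpPolicy', env, verbose=1, tensorboard_log='/tmp/dedo')
model.learn(total_timesteps=100_000)  # adjust the training budget per task
model.save('/tmp/dedo/ppo_hang_garment')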

dedo/run_rllib.py gives an example of how to train an RL algorithm using RLlib:

python -m dedo.run_rllib --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug

For documentation, please refer to the Arguments Reference page in the wiki.

To launch TensorBoard:

tensorboard --logdir=/tmp/dedo --bind_all --port 6006 \
  --samples_per_plugin images=1000

SVAE Examples

dedo/run_svae.py gives an example of how to train various flavors of VAE:

python -m dedo.run_svae --env=HangGarment-v0 \
    --logdir=/tmp/dedo --num_play_runs=3 --viz --debug
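
For background, the VAE variants trained by run_svae.py optimize a reconstruction term plus a KL term. The sketch below is a generic, minimal PyTorch VAE, not DEDO's implementation; the observation and latent sizes are placeholders:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniVAE(nn.Module):
    # Minimal VAE: encode to (mu, logvar), reparameterize, decode.
    def __init__(self, obs_dim=128, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, obs_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior.
    recon_err = F.mse_loss(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl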


Customization

To load a custom object, you first have to add an entry to DEFORM_INFO in task_info.py. The key should be the .obj file path relative to data/:

DEFORM_INFO = {
    ...
    # An example of info for a custom item (np refers to numpy).
    'bags/custom.obj': {
        'deform_init_pos': [0, 0.47, 0.47],
        'deform_init_ori': [np.pi/2, 0, 0],
        'deform_scale': 0.1,
        'deform_elastic_stiffness': 1.0,
        'deform_bending_stiffness': 1.0,
        'deform_true_loop_vertices': [
            [0, 1, 2, 3]  # placeholder, since we don't know the true loops
        ],
    },
    ...
}

Then you can use the --override_deform_obj flag:

python -m dedo.demo --env=HangBag-v0 --cam_resolution 200 --viz --debug \
    --override_deform_obj bags/custom.obj

For items not in DEFORM_INFO you will need to specify sensible defaults, for example:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
  --override_deform_obj=generated_cloth/generated_cloth.obj \
  --deform_init_pos 0.02 0.41 0.63 --deform_init_ori 0 0 1.5708

Example of scaling up the custom mesh objects:

python -m dedo.demo --env=HangGarment-v0 --viz --debug \
   --override_deform_obj=generated_cloth/generated_cloth.obj \
   --deform_init_pos 0.02 0.41 0.55 --deform_init_ori 0 0 1.5708 \
   --deform_scale 2.0 --anchor_init_pos -0.10 0.40 0.70 \
   --other_anchor_init_pos 0.10 0.40 0.70

See more in the Customization Wiki.

Additional Assets

The BGarment dataset is adapted from the Berkeley Garment Library.

The Sewing dataset is adapted from Generating Datasets of 3D Garments with Sewing Patterns.

Comments
  • Adding Point Cloud Observations to DEDO

    This PR adds point cloud (pcd) rendering to DEDO. Summary of changes:

    • Point cloud data extracted from sim environment based on a set of object ids that we want to retain
    • Depth cameras are instantiated using a cameraConfig class, which abstracts out the various camera configurations needed.
    • The cameraConfig class loads camera configs from JSON (for easy loading & sharing of camera configs), or directly by instantiation (if you know how you want to dynamically set your camera).
    • Some sample JSON camera configs are provided (4 total)
    • Unprojecting from the depth image to a point cloud is vectorized, so rendering point cloud observations adds negligible runtime to the overall pipeline (should benchmark this?); a generic sketch of this unprojection follows this comment.
    • The original deform_env had to be adjusted so that the deformable object has ID 0; for some reason, PyBullet only renders the deformable if this holds.

    Known issues:

    • The floor has disappeared from the visual.
    opened by edwin-pan
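
    For reference, a vectorized depth-to-point-cloud unprojection under a pinhole camera model typically looks like the sketch below; this is a generic NumPy version, not the code in this PR, and fx, fy, cx, cy stand for the camera intrinsics:

    import numpy as np

    def depth_to_pointcloud(depth, fx, fy, cx, cy):
        # depth: (H, W) array of depth values in the camera frame.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # (H*W, 3) points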
  • Enables base motion on fetch robot with 1 anchor

    Changes allow the fetch robot to move towards the hanger with an apron.

    Google Doc that explains the changes: https://docs.google.com/document/d/18_9K29K4N6atvtqUxIqKhq6Bt0YSPhQgfWldUWdyvLM/edit?usp=sharing

    There are some TODOs related to removing some hardcoded values and improving the results.

    opened by Nishantjannu
Releases
  • v0.1 (Jan 11, 2022)

    This is the initial release of the code and functionality presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks in December 2021.
Owner
Rika: Sim-to-real with Reinforcement Learning, Variational Inference, Bayesian Optimization