DeformableRavens

Overview

Code for the paper Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. The project website (https://berkeleyautomation.github.io/bags/) also contains the data we used to train policies.

Installation

This is how to get the code running on a local machine. First, get conda on the machine if it isn't there already:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Then, create a new Python 3.7 conda environment (e.g., named "py3-defs") and activate it:

conda create -n py3-defs python=3.7
conda activate py3-defs

Then install:

./install_python_ubuntu.sh

Note I: the code is tested on Ubuntu 18.04. We have not tried other Ubuntu versions or other operating systems.

Note II: Installing TensorFlow using conda is usually easier than pip because the conda version will ship with the correct CUDA and cuDNN libraries, whereas the pip version is a nightmare regarding version compatibility.
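
If you do end up installing TensorFlow manually through conda rather than via install_python_ubuntu.sh, the conda route is roughly the following (the exact package name and version may differ from what the script pins):

conda install tensorflow-gpu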

Note III: the code has only been tested with PyBullet 3.0.4. In fact, there are some places which explicitly hard-code this requirement. Using later versions may work but is not recommended.
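
For reference, one way to pin PyBullet and verify the installed version is shown below; this assumes a pip-based install, and install_python_ubuntu.sh may already handle the pinning:

pip install pybullet==3.0.4
python -c "import pkg_resources; print(pkg_resources.get_distribution('pybullet').version)"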

Environments and Tasks

This repository contains the tasks from the ICRA 2021 submission as well as from the predecessor Transporter Networks paper (presented at CoRL 2020). The latter paper shipped with (roughly) 10 tasks; it does not test with pushing or insertion-translation, but tests with all the others. See Tasks.md for task-specific documentation.

Each task subclasses the Task class and must define its own reset(). The Task class defines the oracle policy used to collect demonstrations (so it is not implemented within each task subclass); the oracle is divided into cases depending on the action primitive, self.primitive, in use.

Similarly, different tasks have different reward functions, but all of them are integrated into the Task superclass and divided based on the self.metric type: pose or zone.
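
To make this concrete, here is a minimal, purely illustrative sketch of how a new task might plug into this structure; the import path, class name, and scene-setup details are assumptions, but the reset() override and the self.primitive / self.metric attributes follow the description above:

from ravens import tasks  # import path is an assumption; check the repository layout

class ExampleTask(tasks.Task):  # hypothetical task, not one shipped with the repo
    def __init__(self):
        super().__init__()
        self.primitive = 'pick_place'  # selects the oracle's action branch
        self.metric = 'pose'           # selects the reward branch ('pose' or 'zone')

    def reset(self, env):
        # Task-specific scene construction: add objects, define goal poses, etc.
        # The oracle policy and the reward computation live in the Task superclass,
        # so they are not re-implemented here.
        pass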

Code Usage

Experiments start with python main.py, with --disp added to show the PyBullet GUI (not used for large-scale experiments). The general logic for main.py proceeds as follows (an example invocation appears after the list):

  • Gather expert demonstrations for the task and put them in data/{TASK}, unless a sufficient number of demonstrations already exists. There are sub-directories for action, color, depth, info, etc., which store the data pickle files with consistent indexing per time step. Caution: this will start "counting" the data from the existing data/ directory. If you want entirely fresh data, delete the relevant files in data/.

  • Given the data, train the designated agent. The logged data is stored in logs/{AGENT}/{TASK}/{DATE}/{train}/ in the form of a tfevent file for TensorBoard. Note: it will do multiple training runs for statistical significance.
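
As an example, a single run might be launched like the following; the flag names --task, --agent, and --num_demos are assumptions, so check Commands.md and the argument parser in main.py for the exact names:

python main.py --task=cable-ring --agent=transporter --num_demos=100
python main.py --task=cable-ring --agent=transporter --num_demos=10 --disp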

For deformables, we actually use a separate load.py script, due to some issues with creating multiple environments.

See Commands.md for commands to reproduce experimental results.

Downloading the Data

We normally generate 1000 demos for each of the tasks. However, this can take a long time, especially for the bag tasks, so we provide pre-generated datasets for all the tasks we tested on the project website. For example, suppose we want to download demonstration data for the "bag-color-goal" task. Download the demonstration data from the website; since this is also a goal-conditioned task, download the goal demonstrations as well. Make new data/ and goals/ directories and put the tar.gz files in the respective directories:

deformable-ravens/
    data/
        bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
    goals/
        bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Note: if you generate data using the main.py script, then it will automatically create the data/ directory, and the generate_goals.py script similarly creates goals/. You only need to manually create data/ and goals/ when downloading pre-existing datasets and putting them in the right spot.
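
For example, assuming the two tar.gz files were downloaded to the repository root, the layout above can be created with:

mkdir -p data goals
mv bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz data/
mv bag-color-goal_20_goals_480Hz_Nov19.tar.gz goals/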

Then untar both of them in their respective directories:

tar -zxvf bag-color-goal_1000_demos_480Hz_filtered_Nov13.tar.gz
tar -zxvf bag-color-goal_20_goals_480Hz_Nov19.tar.gz

Now the data should be ready! If you want to inspect and debug the data, for example the goals data, then do:

python ravens/dataset.py --path goals/bag-color-goal/

Note that by default it saves any content in goals/ to goals_out/ and data in data/ to data_out/. Also, by default, it will save images, which can be very computationally intensive if you do this for the full 1000 demos. (The goals/ data only has 20 demos.) You can change this easily in the main method of ravens/dataset.py.

Running the script will print out some interesting data statistics for you.
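
If you prefer to inspect the raw files directly, each pickle file can also be loaded by hand. This is a minimal sketch; the file name below is hypothetical, so check the directory for the real names:

import pickle

# Hypothetical file name; actual pickle names depend on the episode index and seed.
with open('goals/bag-color-goal/action/000000-0.pkl', 'rb') as f:
    actions = pickle.load(f)
print(type(actions))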

Miscellaneous

If you have questions, please use the public issue tracker, so that all of us can benefit from your questions.

If you find this code or research paper helpful, please consider citing it:

@article{seita_bags_2021,
    author  = {Daniel Seita and Pete Florence and Jonathan Tompson and Erwin Coumans and Vikas Sindhwani and Ken Goldberg and Andy Zeng},
    title   = {{Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks}},
    journal = {arXiv preprint arXiv:2012.03385},
    year    = {2020}
}