LILA: Language-Informed Latent Actions

Code and Experiments for Language-Informed Latent Actions (LILA), presented at the Conference on Robot Learning (CoRL) 2021, for using natural language to guide assistive teleoperation.

This repository bundles code that can be deployed on a Franka Emika Panda Arm, including utilities for processing collected demonstrations (you can find our actual demo data in the data/ directory!), training various LILA and Imitation Learning models, and running live user studies.


Quickstart

Assumes lila is the current working directory! This repository also comes with out-of-the-box linting and strict pre-commit checking; should you wish to turn off this functionality, you can omit the pre-commit install lines below. If you do choose to use these features, you can run make autoformat to automatically clean code, and make check to identify any violations.

Repository Structure

High-level overview of repository file-tree:

  • conf/ - Quinine Configurations (.yaml) for various runs (used in lieu of argparse or typed-argument-parser).
  • environments/ - Serialized Conda Environments for CPU and GPU (CUDA 11.0) development. Other architecture/CUDA-toolkit environments can be added here as necessary.
  • robot/ - Core libfranka robot control code -- simple joint velocity control w/ gripper control.
  • src/ - Source Code -- all utilities for preprocessing and the Lightning Model definitions.
    • preprocessing/ - Preprocessing Code for creating Torch Datasets for Training LILA/Imitation Models.
    • models/ - Lightning Modules for the LILA-FiLM and Imitation-FiLM Architectures (see the FiLM sketch after this list).
  • train.py - Top-Level (main) entry point to the repository, for training and evaluating models. Run this first, pointing it at the appropriate configuration in conf/!
  • Makefile - Top-level Makefile (by default, supports conda serialization and linting). Expand to your needs.
  • .flake8 - Flake8 Configuration File (Sane Defaults).
  • .pre-commit-config.yaml - Pre-Commit Configuration File (Sane Defaults).
  • pyproject.toml - Black and isort Configuration File (Sane Defaults).
  • README.md - You are here!
  • LICENSE - By default, research code is made available under the MIT License.
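
Both architectures condition on language via FiLM (feature-wise linear modulation). As a rough reference, a FiLM layer predicts a per-feature scale and shift from the language embedding -- the following is a generic sketch of the technique, not the repository's exact module:

import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift network features using
    parameters predicted from a conditioning vector (e.g., a language embedding)."""

    def __init__(self, cond_dim: int, feature_dim: int) -> None:
        super().__init__()
        self.gamma = nn.Linear(cond_dim, feature_dim)
        self.beta = nn.Linear(cond_dim, feature_dim)

    def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Per-feature affine transform: gamma(cond) * h + beta(cond)
        return self.gamma(cond) * features + self.beta(cond)


# e.g., modulate a 64-dim state encoding with a 768-dim language embedding
film = FiLM(cond_dim=768, feature_dim=64)
out = film(torch.randn(1, 64), torch.randn(1, 768))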

Local Development - CPU (Mac OS & Linux)

Note: Assumes that conda (Miniconda or Anaconda are both fine) is installed and on your path. Use the -cpu environment file.

conda env create -f environments/environment-cpu.yaml
conda activate lila
pre-commit install

GPU Development - Linux w/ CUDA 11.0

conda env create -f environments/environment-gpu.yaml  # Choose CUDA toolkit based on your hardware - defaults to 11.0!
conda activate lila
pre-commit install

Note: This codebase should work natively for all PyTorch > 1.7 and any CUDA version; if you run into trouble building this repository, please file an issue!


Training LILA or Imitation Models

To train models using the already-collected demonstrations:

# LILA
python train.py --config conf/lila-config.yaml

# No-Language Latent Actions
python train.py --config conf/no-lang-config.yaml

# Imitation Learning (Behavioral Cloning w/ DART-style Augmentation)
python train.py --config conf/imitation-config.yaml
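
Each --config points at a Quinine YAML in conf/. For reference, here is a minimal sketch of a Quinine-based entry point, assuming Quinine's QuinineArgumentParser API; the schema keys below are illustrative, not the repository's actual schema:

from quinine import QuinineArgumentParser

# Cerberus-style schema describing the expected YAML fields (illustrative).
SCHEMA = {
    "model": {"type": "string", "required": True},  # e.g., "lila-film"
    "seed": {"type": "integer", "default": 21},     # reproducibility seed
}

if __name__ == "__main__":
    # Invoked as: python sketch.py --config conf/lila-config.yaml
    quinfig = QuinineArgumentParser(schema=SCHEMA).parse_quinfig()
    print(f"Training `{quinfig.model}` with seed {quinfig.seed}!")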

This will dump models to runs/{lila-final, no-lang-final, imitation-final}/. These paths are hard-coded in the respective teleoperation/execution files below; if you change them, be sure to update those files as well!
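
For reference, loading one of these checkpoints inside a teleoperation/execution script might look like the following (a minimal sketch; the class name LILA, import path, and checkpoint filename are all hypothetical):

from src.models import LILA  # hypothetical import path

# Load a trained Lightning checkpoint and switch to inference mode.
model = LILA.load_from_checkpoint("runs/lila-final/last.ckpt")  # hypothetical filename
model.eval()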

Teleoperating with LILA or End-Effector Control

First, make sure to add the custom Velocity Controller for the Franka Emika Panda Robot Arm (written using libfranka) to ~/libfranka/examples on your robot control box. The controller can be found in robot/libfranka/lilaVelocityController.cpp.

Then, make sure to update the path of the model trained in the previous step (for LILA) in teleoperate.py. Finally, you can drop into controlling the robot with a LILA model (and Joystick - make sure it's plugged in!) with:

# LILA Control
python teleoperate.py

# For No-Language Control, just change the arch!
python teleoperate.py --arch no-lang

# Pure End-Effector Control is also implemented by Default
python teleoperate.py --arch endeff
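
Under the hood, LILA maps low-dimensional joystick input to robot motion; reading that 2-DoF input might look like the following (a minimal pygame sketch; axis indices depend on your controller, and the decoder call is hypothetical):

import pygame

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)  # first plugged-in controller
stick.init()

for _ in range(1000):  # poll for a fixed number of steps
    pygame.event.pump()  # flush the internal event queue
    z = [stick.get_axis(0), stick.get_axis(1)]  # 2-DoF latent input in [-1, 1]
    # velocity = model.decode(z, robot_state, instruction)  # hypothetical decoder call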

Running Imitation Learning

Add the Velocity Controller as described above. Then, make sure to update the path to the trained model in imitate.py and run the following:

python imitate.py

Collecting Kinesthetic Demonstrations

Each lab (and corresponding robot) is built with a different stack and has different preferred ways of recording kinesthetic demonstrations. We provide a rudimentary script, record.py, that shows how we do this using sockets and the built-in libfranka readState.cpp example. This script dumps demonstrations that can immediately be used to train latent-action models.
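
As a rough reference, a record.py-style client might look like the following (a minimal sketch; the host, port, and one-JSON-state-per-line message format are all assumptions, not the script's actual protocol):

import json
import pickle
import socket

HOST, PORT = "172.16.0.2", 8080  # hypothetical address of the robot control box

states = []
with socket.create_connection((HOST, PORT)) as conn:
    reader = conn.makefile("r")
    for line in reader:
        states.append(json.loads(line))  # assumes one JSON-encoded state per line
        if len(states) >= 1000:
            break

with open("data/demo-000.pkl", "wb") as fp:  # hypothetical filename
    pickle.dump(states, fp)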

Start-Up from Scratch

In case the above conda environment loading does not work for you, here are the concrete package dependencies required to run LILA:

conda create --name lila python=3.8
conda activate lila
conda install pytorch torchvision torchaudio -c pytorch
conda install ipython jupyter
conda install pytorch-lightning -c conda-forge

pip install black flake8 isort matplotlib pre-commit pygame quinine transformers typed-argument-parser wandb
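
After installation, a quick smoke test confirms that the core dependencies resolve:

import pytorch_lightning
import torch
import transformers

print(f"torch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
print(f"pytorch-lightning {pytorch_lightning.__version__} | transformers {transformers.__version__}")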