
The Empirical Investigation of Representation Learning for Imitation (EIRLI)


Over the past several years, representation learning has exploded as a subfield, and with it has come a plethora of new methods, each slightly different from the others.

Our Empirical Investigation of Representation Learning for Imitation (EIRLI) has two main goals:

  1. To create a modular algorithm definition system that allows researchers to easily pick and choose from a wide array of commonly used design axes
  2. To facilitate testing of representations within the context of sequential learning, particularly imitation learning and offline reinforcement learning

Common Use Cases

Do you want to…

  • Reproduce our results? You can find scripts and instructions here to help reproduce our benchmark results.
  • Design and experiment with a new representation learning algorithm using our modular components? You can find documentation on that here.
  • Use our algorithm definitions in a setting other than sequential learning? The base example here demonstrates this simplified use case.

Otherwise, you can see our full ReadTheDocs documentation here.

Modular Algorithm Design

This library was designed to break the definition of a representation learning algorithm down into several key parts. The intention was that this system be flexible enough that many commonly used algorithms can be defined through different combinations of these modular components.

The design relies on the central concepts of a "context" and a "target". In very rough terms, all of our algorithms work by applying some transformation to the context, applying some transformation to the target, and then calculating a loss as a function of those two results. Sometimes an extra context object is passed in as an additional input to these transformations; a runnable sketch of this pattern follows the examples below.

Some examples are:

  • In SimCLR, the context and target are the same image frame; augmentation and then encoding are applied to both. Each learned representation is sent through a decoder, and the context and target representations are pulled together by a contrastive loss.
  • In TemporalCPC, the context is a frame at time t and the target is a frame at time t+k. As in SimCLR, augmentation is applied to each frame before it is put through an encoder, and the two resulting representations are pulled together.
  • In a Variational Autoencoder, the context and target are the same image frame. A bottleneck encoder and then a reconstructive decoder are applied to the context, and the reconstructed context is compared to the target through an L2 pixel loss.
  • A Dynamics Prediction model can be seen as a conceptual combination of an autoencoder (which tries to reconstruct the current full image frame) and TemporalCPC (which predicts future information from current information). In a Dynamics model, we predict a future frame (the target) given the current frame (the context) and an action as extra context.

This abstraction isn't perfect, but we believe it is coherent enough to allow for a good number of shared mechanisms between algorithms, and flexible enough to support a wide variety of them.

The modular design mentioned above is facilitated through a number of class interfaces, each of which handles a different component of the algorithm. You define an algorithm by selecting implementations of these shared interfaces and passing them to a RepresentationLearner, which handles the base machinery of performing the transformations.

[Diagram: how these components make up a training pipeline for our benchmark]

  1. TargetPairConstructor - This component takes in a set of trajectories (assumed to be iterators of dicts with an 'obs' key and optional 'acts' and 'dones' keys) and creates a dataset of (context, target, optional extra context) pairs that is shuffled to form the training set.
  2. Augmenter - This component governs whether either or both of the context and target objects are augmented before being passed to the encoder. Note that this concept only meaningfully applies when the object being augmented is an image frame.
  3. Encoder - The encoder is responsible for taking in an image frame and producing a learned vector representation. It is optionally chained with a Decoder to produce the input to the loss function, which may be a reconstructed image (as in a VAE or Dynamics model) or a projected version of the learned representation (as in contrastive methods like SimCLR that use a projection head).
  4. Decoder - As mentioned above, the Decoder acts as a bridge between the representation in the form you want to use for transfer and whatever input is required by your loss function, which is often some transformation of that canonical representation.
  5. BatchExtender - This component is used when you want to calculate loss on batch elements that did not go through your encoder and decoder on this step. It is chiefly used for contrastive methods with momentum, where elements from a cached store of previously calculated representations serve as negatives in the contrastive loss.
  6. LossCalculator - This component takes in the transformed context and transformed target and handles the loss calculation, along with any transformations that need to happen as a part of that calculation.
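
As a rough illustration of how these components fit together, the sketch below wires stand-ins for all six interfaces into a single training step. The function and argument names are simplified assumptions, not EIRLI's exact signatures.

```python
# Simplified sketch of one training step through all six components.
# Names and signatures are illustrative stand-ins, not EIRLI's exact API.
import torch

def training_step(batch, augmenter, encoder, decoder,
                  batch_extender, loss_calculator):
    # (1) TargetPairConstructor output: (context, target, extra_context).
    context, target, extra_context = batch
    # (2) Augmenter: optionally augment context and/or target images.
    context, target = augmenter(context, target)
    # (3) Encoder: map each frame to a learned vector representation.
    z_context, z_target = encoder(context), encoder(target)
    # (4) Decoder: bridge from the transfer-ready representation to the
    #     form the loss needs (projection head, reconstruction, ...).
    d_context = decoder(z_context, extra_context)
    d_target = decoder(z_target, extra_context)
    # (5) BatchExtender: optionally add cached representations, e.g. as
    #     extra negatives for momentum-based contrastive methods.
    d_context, d_target = batch_extender(d_context, d_target)
    # (6) LossCalculator: compute the loss from the two transformed objects.
    return loss_calculator(d_context, d_target)

# Trivial stand-ins so the sketch runs end to end.
loss = training_step(
    batch=(torch.randn(8, 16), torch.randn(8, 16), None),
    augmenter=lambda c, t: (c, t),                   # identity augmentation
    encoder=torch.nn.Linear(16, 32),
    decoder=lambda z, extra: z,                      # identity decoder
    batch_extender=lambda c, t: (c, t),              # no extra elements
    loss_calculator=lambda c, t: ((c - t) ** 2).mean(),
)
```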

Training Scripts

In addition to machinery for constructing algorithms, the repo contains a set of Sacred-based training scripts for testing different representation learning algorithms as either pretraining or joint training components within an imitation learning pipeline. These are most likely to fit your use case if you want to reproduce our results or train models in similar settings.
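
For orientation, a Sacred-based training script generally takes the shape below. This is a generic sketch of the Sacred workflow with made-up config names, not one of the repo's actual scripts; consult the reproduction instructions for the real entry points and options.

```python
# Generic shape of a Sacred experiment script (illustrative only; see the
# repo's own scripts for the real configuration options).
from sacred import Experiment

ex = Experiment('pretrain_representation')

@ex.config
def config():
    algo = 'SimCLR'         # which representation learning algorithm to run
    batch_size = 256
    pretrain_epochs = 10

@ex.main
def run(algo, batch_size, pretrain_epochs):
    print(f'Pretraining {algo} for {pretrain_epochs} epochs '
          f'(batch size {batch_size})')
    # ... construct the representation learner and train here ...

if __name__ == '__main__':
    ex.run_commandline()
```

Sacred then lets you override config values from the command line, e.g. `python pretrain.py with algo=TemporalCPC batch_size=128`.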

Owner
Center for Human-Compatible AI (CHAI), which seeks to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.