
The CARL Benchmark Library

CARL (context adaptive RL) provides highly configurable contextual extensions to several well-known RL environments. It's designed to test your agent's generalization capabilities in all scenarios where intra-task generalization is important.

Benchmarks include:

  • OpenAI gym classic control suite extended with several physics context features like gravity or friction

  • OpenAI gym Box2D BipedalWalker, LunarLander and CarRacing, each with their own modification possibilities like new vehicles to race

  • All Brax locomotion environments with exposed internal features like joint strength or torso mass

  • Super Mario (TOAD-GAN), a procedurally generated jump'n'run game with control over level similarity

  • RNADesign, an environment for RNA design under given structure constraints, with target structures drawn from different datasets

Screenshot of each environment included in CARL.
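
For a first impression of the interface, the snippet below instantiates a contextual CartPole variant with modified gravity. The import path, the contexts argument, and the "gravity" feature name are assumptions based on the current repository layout and may differ between versions, so treat this as a sketch rather than the definitive API:

from carl.envs import CARLCartPoleEnv

# Each entry maps an instance id to a context, i.e. a dict of physics features.
# Underspecified contexts are filled with the default context.
contexts = {
    0: {"gravity": 9.8},    # default-like gravity
    1: {"gravity": 11.76},  # gravity increased by 20%
}

env = CARLCartPoleEnv(contexts=contexts)
state = env.reset()  # a context (instance) is selected on reset
for _ in range(100):
    state, reward, done, info = env.step(env.action_space.sample())
    if done:
        state = env.reset()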

Installation

We recommend using a virtual environment (e.g. Anaconda) to install CARL and its dependencies. We recommend Python 3.9 and test on Linux.

First, clone our repository and install the basic requirements:

git clone https://github.com/automl/CARL.git --recursive
cd CARL
pip install .

This will only install the basic classic control environments, which should run on most operating systems. For the full set of environments, use the install options:

pip install -e .[box2d, brax, rna, mario]

These may not be compatible with Windows systems. On macOS, the Box2D environments may need to be installed via conda:

conda install -c conda-forge gym-box2d

In general, we test on Linux systems, but aim to keep the benchmark as compatible with macOS as possible. At this point, however, Mario will not run on any operating system besides Linux.

To install the additional requirements for TOAD-GAN:

javac src/envs/mario/Mario-AI-Framework/**/*.java

If you want to use the RNA design environment:

cd src/envs/rna/learna
make requirements
make data

In case you want to run our experiments or use our training files, also install the experiment dependencies:

pip install -e .[experiments]

Train an Agent

To get started with CARL, you can use our 'train.py' script. It will train a PPO agent on the environment of your choice, with context variations sampled around the default context using a configurable standard deviation.

To train on CARLCartPoleEnv with gravity and friction varied by 20% around their default values, run:

python train.py \
--env CARLCartPoleEnv \
--context_args gravity friction \
--default_sample_std_percentage 0.2 \
--outdir <result_location>

You can use the plotting scripts in src/eval to view the results.
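
Roughly speaking, the --default_sample_std_percentage flag controls how far the sampled contexts spread around the defaults: each selected context feature is drawn from a distribution centred on its default value with a standard deviation of 20% of that value. A rough illustration of this sampling (hypothetical default values, not CARL's actual code):

import numpy as np

rng = np.random.default_rng(0)
defaults = {"gravity": 9.8, "friction": 0.8}  # hypothetical default context
std_pct = 0.2                                 # --default_sample_std_percentage

# 100 training instances, each with gravity and friction perturbed around the defaults
contexts = {
    i: {k: rng.normal(loc=v, scale=std_pct * abs(v)) for k, v in defaults.items()}
    for i in range(100)
}
print(contexts[0])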

CARL's Contextual Extension

CARL contextually extends an environment by making the context visible and configurable. During training, the agent can therefore encounter different contexts and train for generalization. As an example, we show how Brax' Fetch is extended and embedded by CARL. Different instantiations can be achieved by setting the context features to different values.

CARL contextually extends Brax' Fetch.
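
The general idea can be pictured with a plain gym wrapper that appends the context features to every observation. This is only a simplified sketch of the mechanism, not CARL's actual implementation, which additionally handles context sampling, hiding or masking context features, and jax-based environments:

import gym
import numpy as np

class ContextConcatWrapper(gym.ObservationWrapper):
    """Append a fixed context vector to each observation (illustrative only)."""

    def __init__(self, env, context):
        super().__init__(env)
        self.context = np.array(list(context.values()), dtype=np.float32)
        low = np.concatenate([env.observation_space.low, np.full_like(self.context, -np.inf)])
        high = np.concatenate([env.observation_space.high, np.full_like(self.context, np.inf)])
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def observation(self, obs):
        return np.concatenate([np.asarray(obs, dtype=np.float32), self.context])

# Usage: wrap a standard environment with one fixed context.
env = ContextConcatWrapper(gym.make("CartPole-v1"), {"gravity": 9.8, "pole_length": 0.5})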

Cite Us

@misc{CARL,
  author    = {C. Benjamins and 
               T. Eimer and 
               F. Schubert and 
               A. Biedenkapp and 
               B. Rosenhahn and 
               F. Hutter and 
               M. Lindauer},
  title     = {CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning},
  howpublished = {https://github.com/automl/CARL},
  year      = {2021},
  month     = aug,
}

References

OpenAI gym, Brockman et al., 2016. arXiv preprint arXiv:1606.01540

Brax -- A Differentiable Physics Engine for Large Scale Rigid Body Simulation, Freeman et al., NeurIPS 2021 (Dataset & Benchmarking Track)

TOAD-GAN: Coherent Style Level Generation from a Single Example, Awiszus et al., AIIDE 2020

Learning to Design RNA, Runge et al., ICLR 2019

License

CARL falls under the Apache License 2.0 (see file 'LICENSE'), as permitted by all the work we build on. This includes CARLMario, which is not based on the Nintendo game but on TOAD-GAN and TOAD-GUI, both released under an MIT license. These in turn make use of the Mario AI framework (https://github.com/amidos2006/Mario-AI-Framework). This is not the original game but a replica, built explicitly for research purposes, and it includes a copyright notice (https://github.com/amidos2006/Mario-AI-Framework#copyrights).

Comments
  • Rna fixup

    RNA is now better documented and more easily runnable. There's also an option to subsample the datasets instead of always using all instances per context.

    What's missing right now are more context options, like filtering by solvers or GC content, but those aren't easily extractable from our data at the moment, so that's a separate work package altogether.

    opened by TheEimer 6
  • Gym 0.22.0

    • update required minimum gym version number
    • added pygame as a requirement because it is not picked up by the gym requirements
    • getting rid of CustomBipedalWalkerEnv because the functionality of changing the gravity is covered by CARLEnv (same for CustomLunarLanderEnv)
    • add high game over penalty for LunarLander by a wrapper
    opened by benjamc 6
  • Instance selection

    Instance selection is now a class. The default is still round-robin selection. An instance is only selected when env.reset() (or, more specifically, _progress_instance()) is called.
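
    A minimal sketch of the round-robin behaviour (illustrative only, not the class added in this PR):

    class RoundRobinSelector:
        def __init__(self, instance_ids):
            self.instance_ids = list(instance_ids)
            self._idx = -1

        def __call__(self):
            # invoked from the environment's reset(), cf. _progress_instance()
            self._idx = (self._idx + 1) % len(self.instance_ids)
            return self.instance_ids[self._idx]

    selector = RoundRobinSelector([0, 1, 2])
    assert [selector() for _ in range(4)] == [0, 1, 2, 0]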

    opened by benjamc 4
  • Added Encoders

    Context encoders have been added as a folder, and an experiment for running the encoder has been added to the experiments folder. Since the working directory is the experiment one, I had to add an absolute path for the saved weights. This might need to be changed in the config file.

    opened by amsks 4
  • Update References with correct conference

    Thanks for the pointer to the survey, but it hasn't been published anywhere, so that detail is incorrect (I wouldn't want to claim that it's published somewhere when it isn't).

    opened by RobertKirk 3
  • Performance Deviations in Brax

    Comparing HalfCheetah in Brax (via gym.make and then wrapped as here: https://github.com/google/brax/blob/main/notebooks/training_torch.ipynb) vs. in CARL makes a big difference in return, even when the context is kept static. Do we do any unexpected reward normalization? Does the way we reset the env make a difference compared to theirs (as we actually update the simulation)?

    bug 
    opened by TheEimer 2
  • Integrate DM Control

    • [ ] (convert test file to jupyter notebook. I would like to keep that)
    • [ ] check tests / write more to increase coverage
    • [x] update README.md
    • [x] update documentation
    • [x] add dm_control to requirements
    • [x] support dict observation space
    documentation tests 
    opened by benjamc 2
  • Fix gym version

    Gym released a new version where the signature of the step function has changed. This affects our code and requires a separate PR. For now, fix the gym version.

    opened by benjamc 1
  • Initial statedistrs #48

    #48 Make initial state distribution configurable. So far, only uniform distributions are used and the bounds can be adjusted.

    Classic control:

    • [x] Acrobot
    • [x] Pendulum
    • [x] MountainCar (normal distribution instead of uniform)
    • [x] MountainCarContinuous (uniform distribution)
    • [x] CartPole

    Box2d

    • [x] LunarLander

    • [ ] (maybe/later) Make distributions fully configurable by passing the distribution class and its parameters.

    • [x] Update documentation: Contexts are automatically filled with the default context if underspecified.
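
    A minimal illustration of the idea (hypothetical names and bounds, not the PR's actual code): the bounds of the uniform initial state distribution become part of the context instead of being hard-coded:

    import numpy as np

    def sample_initial_state(rng, low=-0.05, high=0.05, size=4):
        # bounds of the uniform initial state distribution are configurable via the context
        return rng.uniform(low=low, high=high, size=size)

    rng = np.random.default_rng(0)
    state = sample_initial_state(rng, low=-0.1, high=0.1)  # wider initial spread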

    opened by benjamc 1
  • Integrate dmcontrol

    Add support for dm control environments. Integrated walker, quadruped and fish.

    In dmc environments there is an additional setting for the context, namely the context mask, which can reduce the number of context features.

    opened by sebidoe 1
  • use appropriate library for building states

    Previously, when we did not hide the context, we concatenated the context to the state; for jax-based environments (Brax) this meant converting the state from a jax array to a numpy array. Now, the state builder checks which library to use and keeps jax states as jax arrays and numpy states as numpy arrays.
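
    Roughly, the dispatch looks like this (a sketch of the idea, not the actual patch):

    import numpy as np
    import jax.numpy as jnp

    def build_state(state, context_values):
        # keep jax states in jax and numpy states in numpy when appending the context
        if isinstance(state, jnp.ndarray):
            return jnp.concatenate([state, jnp.asarray(context_values)])
        return np.concatenate([np.asarray(state), np.asarray(context_values)])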

    Noticed in #42.

    opened by benjamc 1
  • AttributeError: 'System' object has no attribute 'body_idx' in brax

    When running test/test_all_envs.py, an AttributeError: 'System' object has no attribute 'body_idx' is raised in the carl_fetch and carl_humanoid environments.

    opened by andy-james0310 3
Releases (v0.2.0)
  • v0.2.0 (Jul 12, 2022)

    • Integrate dm control environments (#55)
    • Add context masks to only append those to the state (#54)
    • Extend classic control environments to parametrize initial state distributions (#52)
    • Remove RNA environment for maintenance (#61)
    • Fixed pre-commit (mypy, black, flake8, isort) (#62)
Owner
AutoML-Freiburg-Hannover