
Continual World

Continual World is a benchmark for continual reinforcement learning. It contains realistic robotic tasks drawn from MetaWorld.

The core of our benchmark is the CW20 sequence, in which 20 tasks are run, each with a budget of 1M steps.

We provide the complete source code for the benchmark, together with implementations of the tested algorithms and code for producing result tables and plots.

See also the paper and the website.

Figure: the CW20 task sequence.

Installation

You can either install directly in a Python environment (such as virtualenv or conda) or build a container, using Docker or Singularity.

Standard installation (directly in environment)

First, you'll need the MuJoCo simulator. Please follow the instructions from the mujoco_py package. As MuJoCo has been made freely available, you can obtain a free license here.
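For reference, a typical mujoco_py setup looks something like the sketch below; the MuJoCo version (here 2.0) and the paths are assumptions, so adjust them to your release:

# minimal sketch of a mujoco_py setup (version and paths are assumptions)
mkdir -p ~/.mujoco
# unpack the MuJoCo archive and place your license key (mjkey.txt) in ~/.mujoco, then:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco200/bin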

Next, go to the main directory of this repo and run

pip install .

Alternatively, if you want to install in editable mode, run

pip install -e .
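To sanity-check the installation, you can try importing the package; this assumes it is importable as continualworld:

python3 -c "import continualworld; print('OK')"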

Docker image

  • To build the image with the continualworld package installed inside, run docker build . -f assets/Dockerfile -t continualworld

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run docker build . -f assets/Dockerfile -t continualworld --build-arg INSTALL_CW_PACKAGE=false

When the image is ready, you can run

docker run -it continualworld bash

to get inside the image.
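Anything written inside the container is lost when it exits, so if you want to keep experiment logs, you can mount a host directory; a sketch, where the /logs mount point is an arbitrary choice:

# mount ./logs from the host at /logs inside the container
docker run -it -v "$(pwd)/logs:/logs" continualworld bash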

Singularity image

  • To build the image with the continualworld package installed inside, run singularity build continualworld.sif assets/singularity.def

  • To build the image WITHOUT the continualworld package but with all the dependencies installed, run singularity build continualworld.sif assets/singularity_only_deps.def

When the image is ready, you can run

singularity shell continualworld.sif

to get inside the image.
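You can also run a single command without opening a shell, e.g.

singularity exec continualworld.sif python3 run_single.py --help

Since Singularity binds the current directory by default, this works when invoked from the main directory of this repo.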

Running

You can run single-task, continual learning, or multi-task learning experiments with the run_single.py, run_cl.py, and run_mt.py scripts, respectively.

To see the available script arguments, run with the --help option, e.g.

python3 run_single.py --help

Examples

Below are example commands that run experiments at a very small scale.

Single task

python3 run_single.py --seed 0 --steps 2e3 --log_every 250 --task hammer-v1 --logger_output tsv tensorboard

Continual learning

python3 run_cl.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW20 --cl_method ewc --cl_reg_coef 1e4 --logger_output tsv tensorboard

Multi-task learning

python3 run_mt.py --seed 0 --steps_per_task 2e3 --log_every 250 --tasks CW10 --use_popart True --logger_output tsv tensorboard
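Each of the commands above includes tensorboard in --logger_output, so you can inspect the training curves with TensorBoard; the log directory below is a placeholder for wherever your run wrote its logs:

tensorboard --logdir <your_log_dir>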

Reproducing the results from the paper

Commands to run the experiments that reproduce the main results from the paper can be found in examples/paper_cl_experiments.sh, examples/paper_mt_experiments.sh and examples/paper_single_experiments.sh. Because of the number of runs these files contain, it is infeasible to simply execute them sequentially. We hope, though, that these files will be helpful, as they specify precisely what needs to be run.
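If each line of one of these files is a self-contained command, one way to fan the runs out on a single machine is GNU parallel (an assumption about your tooling; on a cluster you would more likely use your scheduler, e.g. Slurm job arrays):

# run up to 4 of the listed experiments at a time
parallel --jobs 4 < examples/paper_cl_experiments.sh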

After the logs from the runs are gathered, you can produce tables and plots; see the section below.

Producing result tables and plots

After you've run experiments and saved the logs, you can run the script that produces result tables and plots:

python produce_results.py --cl_logs examples/logs/cl --mtl_logs examples/logs/mtl --baseline_logs examples/logs/baseline

In this command, the respective arguments should be replaced with paths to directories containing logs from continual learning experiments, multi-task experiments, and baseline (single-task) experiments. Each of these should be a directory containing multiple experiments, for different methods and/or seeds. You can see the expected directory structure in the example logs included in the command above.
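Schematically, the expected layout looks something like this (the names are illustrative placeholders; consult the bundled example logs for the exact structure):

examples/logs/cl/
  <method_A_seed_0>/   # one subdirectory per experiment run
  <method_A_seed_1>/
  <method_B_seed_0>/
  ...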

By default, results will be saved to the results directory.

Alternatively, check out the nb_produce_results.ipynb notebook to see the plots and tables in notebook form.

Download our saved logs and produce results

You can download the logs of the experiments behind the paper's results from here. Then unzip the file and run

python produce_results.py --cl_logs saved_logs/cl --mtl_logs saved_logs/mt --baseline_logs saved_logs/single

to produce tables and plots.

As a result, a CSV file with the results will be produced, along with plots like this one (and more!):

Figure: average performance.

Full output can be found here.

Acknowledgements

Continual World heavily relies on MetaWorld.

The implementation of SAC used in our code comes from Spinning Up in Deep RL.

Our research was supported by the PLGrid infrastructure.

Our experiments were managed using Neptune.

Official implementation of EfficientPose

EfficientPose This is the official implementation of EfficientPose. We based our work on the Keras EfficientDet implementation xuannianz/EfficientDet

2 May 17, 2022
My solutions for Stanford University course CS224W: Machine Learning with Graphs Fall 2021 colabs (GNN, GAT, GraphSAGE, GCN)

machine-learning-with-graphs My solutions for Stanford University course CS224W: Machine Learning with Graphs Fall 2021 colabs Course materials can be

Marko Njegomir 7 Dec 14, 2022
Everything you need to know about NumPy( Creating Arrays, Indexing, Math,Statistics,Reshaping).

Everything you need to know about NumPy( Creating Arrays, Indexing, Math,Statistics,Reshaping).

1 Feb 14, 2022
VarCLR: Variable Semantic Representation Pre-training via Contrastive Learning

    VarCLR: Variable Representation Pre-training via Contrastive Learning New: Paper accepted by ICSE 2022. Preprint at arXiv! This repository contain

squaresLab 32 Oct 24, 2022
BasicNeuralNetwork - This project looks over the basic structure of a neural network and how machine learning training algorithms work

BasicNeuralNetwork - This project looks over the basic structure of a neural network and how machine learning training algorithms work. For this project, I used the sigmoid function as an activation

Manas Bommakanti 1 Jan 22, 2022
AntroPy: entropy and complexity of (EEG) time-series in Python

AntroPy is a Python 3 package providing several time-efficient algorithms for computing the complexity of time-series. It can be used for example to e

Raphael Vallat 153 Dec 27, 2022
Official implementation of the paper "AAVAE: Augmentation-AugmentedVariational Autoencoders"

AAVAE Official implementation of the paper "AAVAE: Augmentation-AugmentedVariational Autoencoders" Abstract Recent methods for self-supervised learnin

Grid AI Labs 48 Dec 12, 2022
Python Implementation of algorithms in Graph Mining, e.g., Recommendation, Collaborative Filtering, Community Detection, Spectral Clustering, Modularity Maximization, co-authorship networks.

Graph Mining Author: Jiayi Chen Time: April 2021 Implemented Algorithms: Network: Scrabing Data, Network Construbtion and Network Measurement (e.g., P

Jiayi Chen 3 Mar 03, 2022
The repository contains source code and models to use PixelNet architecture used for various pixel-level tasks. More details can be accessed at .

PixelNet: Representation of the pixels, by the pixels, and for the pixels. We explore design principles for general pixel-level prediction problems, f

Aayush Bansal 196 Aug 10, 2022
Implementation for "Seamless Manga Inpainting with Semantics Awareness" (SIGGRAPH 2021 issue)

Seamless Manga Inpainting with Semantics Awareness [SIGGRAPH 2021](To appear) | Project Website | BibTex Introduction: Manga inpainting fills up the d

101 Jan 01, 2023
GAN example for Keras. Cuz MNIST is too small and there should be something more realistic.

Keras-GAN-Animeface-Character GAN example for Keras. Cuz MNIST is too small and there should an example on something more realistic. Some results Trai

160 Sep 20, 2022
A collection of easy-to-use, ready-to-use, interesting deep neural network models

Interesting and reproducible research works should be conserved. This repository wraps a collection of deep neural network models into a simple and un

Aria Ghora Prabono 16 Jun 16, 2022
Implementation of OpenAI paper with Simple Noise Scale on Fastai V2

README Implementation of OpenAI paper "An Empirical Model of Large-Batch Training" for Fastai V2. The code is based on the batch size finder implement

13 Dec 10, 2021
This is the code for the paper "Motion-Focused Contrastive Learning of Video Representations" (ICCV'21).

Motion-Focused Contrastive Learning of Video Representations Introduction This is the code for the paper "Motion-Focused Contrastive Learning of Video

11 Sep 23, 2022
Tensorflow-seq2seq-tutorials - Dynamic seq2seq in TensorFlow, step by step

seq2seq with TensorFlow Collection of unfinished tutorials. May be good for educational purposes. 1 - simple sequence-to-sequence model with dynamic u

Matvey Ezhov 1k Dec 17, 2022
The source code for CATSETMAT: Cross Attention for Set Matching in Bipartite Hypergraphs

catsetmat The source code for CATSETMAT: Cross Attention for Set Matching in Bipartite Hypergraphs To be able to run it, add catsetmat to PYTHONPATH H

2 Dec 19, 2022
CMP 414/765 course repository for Spring 2022 semester

CMP414/765: Artificial Intelligence Spring2021 This is the GitHub repository for course CMP 414/765: Artificial Intelligence taught at The City Univer

ch00226855 4 May 16, 2022
Xi Dongbo 78 Nov 29, 2022
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model

This repository contains the similarity metrics designed and evaluated in the paper, and instructions and code to re-run the experiments. Implementation in the deep-learning framework PyTorch

Steffen 86 Dec 27, 2022
FID calculation with proper image resizing and quantization steps

clean-fid: Fixing Inconsistencies in FID Project | Paper The FID calculation involves many steps that can produce inconsistencies in the final metric.

Gaurav Parmar 606 Jan 06, 2023