An OpenAI-Gym Package for Training and Testing Reinforcement Learning algorithms with OpenSim Models

Overview

Logo

Authors: Utkarsh A. Mishra and Dr. Dimitar Stanev

Advisors: Dr. Dimitar Stanev and Prof. Auke Ijspeert, Biorobotics Laboratory (BioRob), EPFL

Video Playlist: https://www.youtube.com/playlist?list=PLDvnH871wUkFPOcCKcsTN6ZzzjNZOVlt_

The bioimitation-gym package is a Python package that provides Gym environments for training and testing reinforcement learning controllers on OpenSim models. The environments follow the OpenAI Gym interface.

This work develops a framework for learning to imitate human gaits. Humans perform movements such as walking, running, and jumping in a highly efficient manner, which motivated this project. Skeletal and musculoskeletal human models were considered for motions in the sagittal and frontal planes, and the results from both were compared exhaustively. While skeletal models are driven by motor (torque) actuation, musculoskeletal models are driven by muscle-tendon actuation.

Baseline Architecture

Model-free reinforcement learning algorithms were used to optimize inverse dynamics control actions, with the primary objective of imitating a reference motion and secondary objectives of minimizing effort, measured as the power spent by the motors or the metabolic energy consumed by the muscles. For the motor-actuated model, the control actions are target joint angles, which a proportional-derivative (PD) controller converts into joint torques. For the muscle-tendon-actuated model, the control actions are muscle excitations, which are converted implicitly into muscle activations and then into muscle forces that apply moments about the joints. Muscle-tendon actuation proved superior to motor actuation: the resulting motions are inherently smooth because of the muscle activation dynamics and require no external regularizers.
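
To make the two actuation paths concrete, here is a minimal, illustrative sketch (not the package's actual controller code; gains, time constants, and array shapes are placeholder assumptions) of a PD torque law and a common first-order muscle activation model:

import numpy as np

def pd_torques(q_target, q, q_dot, kp, kd):
    """PD law: the policy outputs target joint angles and the controller
    turns the tracking error into joint torques."""
    return kp * (q_target - q) - kd * q_dot

def activation_dynamics(a, u, dt, tau_act=0.015, tau_deact=0.060):
    """Common first-order model of muscle activation dynamics: the
    excitation u is low-pass filtered into the activation a, which is
    what makes muscle-driven motions inherently smooth."""
    tau = tau_act if u > a else tau_deact
    return a + dt * (u - a) / tau

# Example with placeholder values for a 3-joint planar model.
q = np.zeros(3)                         # current joint angles (rad)
q_dot = np.zeros(3)                     # current joint velocities (rad/s)
q_target = np.array([0.1, -0.2, 0.05])  # target angles from the policy
tau = pd_torques(q_target, q, q_dot, kp=np.full(3, 100.0), kd=np.full(3, 5.0))

a = activation_dynamics(a=0.2, u=0.8, dt=0.01)  # activation rises toward the excitation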

Results

All results and analyses are presented qualitatively and quantitatively, with illustrative examples.

Installation

Please follow the instructions in the installation.md file to install the package.

Environment in the bioimitation-gym package

All environments in the bioimitation-gym package are provided in the bioimitation_envs/imitation_envs/envs directory. They fall into two main categories:

  • muscle environments: used for training the muscle-tendon-actuated models.
  • torque environments: used for training the torque-actuated models.

Further, 2D / planar and 3D / spatial environments are provided for each category. The tasks covered in each of the sub-categories are as follows:

  • Walking
  • Running
  • Jumping
  • Prosthetic Walking with a locked knee joint for the left leg
  • Walking with a typical cerebral palsy defect

The following 2D muscle-actuated environment IDs are available in the package:

  • MuscleWalkingImitation2D-v0
  • MuscleRunningImitation2D-v0
  • MuscleJumpingImitation2D-v0
  • MuscleLockedKneeImitation2D-v0

The following 3D muscle-actuated environment IDs are available in the package:

  • MuscleWalkingImitation3D-v0
  • MuscleRunningImitation3D-v0
  • MuscleJumpingImitation3D-v0
  • MuscleLockedKneeImitation3D-v0
  • MusclePalsyImitation3D-v0

The following 2D torque-actuated environment IDs are available in the package:

  • TorqueWalkingImitation2D-v0
  • TorqueRunningImitation2D-v0
  • TorqueJumpingImitation2D-v0
  • TorqueLockedKneeImitation2D-v0

The following 3D torque-actuated environment IDs are available in the package (the snippet after this list shows how to enumerate all registered IDs):

  • TorqueWalkingImitation3D-v0
  • TorqueRunningImitation3D-v0
  • TorqueJumpingImitation3D-v0
  • TorqueLockedKneeImitation3D-v0
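
All of the environment IDs above become available once the bioimitation package is imported, since importing it registers them with Gym. A quick way to enumerate them, sketched here assuming the classic gym registry API (the same pre-0.21 interface used by the example script later in this README):

import gym
import bioimitation  # importing the package registers the environments listed above

# Print every registered environment ID that belongs to this package.
# gym.envs.registry.all() is the classic (pre-0.21) Gym registry API.
imitation_ids = sorted(spec.id for spec in gym.envs.registry.all()
                       if 'Imitation' in spec.id)
print('\n'.join(imitation_ids))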

Usage Instructions

The complete bioimitation directory consists of the following sub-directories:

  • imitation_envs: This directory contains the data and environments associated with the package.
  • learning_algorithm: This directory contains the learning algorithm used for the experiments. The code is a modified version of the original Soft Actor-Critic (SAC) algorithm, adapted from the open-source implementation ikostrikov/jaxrl.

More information on the subdirectories can be found in their respective README files (if any).

The package is mostly built on the highly scalable and distributed reinforcement learning framework Ray RLlib. Template scripts for training and testing the models are provided in the tests directory.

To run an RLlib training script, run the following command:

python tests/sample_rllib_training.py  --env_name MuscleWalkingImitation2D-v0
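
The provided tests/sample_rllib_training.py is the recommended entry point. If you prefer to wire an environment into RLlib yourself, the sketch below shows one possible way; the registered name, config contents, and trainer settings are illustrative assumptions, not the package's actual training setup:

import gym
import bioimitation  # registers the BioImitation environments with Gym
from ray import tune
from ray.tune.registry import register_env

def env_creator(env_config):
    # RLlib forwards env_config to this factory; here it carries the
    # environment configuration dict (e.g. the contents of configs/env_default.py).
    return gym.make('MuscleWalkingImitation2D-v0', config=dict(env_config))

register_env('muscle_walking_2d', env_creator)

if __name__ == '__main__':
    tune.run(
        'SAC',  # the algorithm family used in this work
        config={
            'env': 'muscle_walking_2d',
            'env_config': {},   # fill in from configs/env_default.py
            'framework': 'torch',
            'num_workers': 4,
        },
        stop={'timesteps_total': 1_000_000},
    )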

You can change the algorithm configuration in the configs directory: the configs/train_default.py file contains the default configuration for the training script, and the configs/test_default.py file contains the default configuration for the testing script, which is run with:

python tests/sample_rllib_testing.py
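
These configuration files follow the ml_collections convention of exposing a get_config() function that returns a ConfigDict, which is what config_flags expects in the test script below. A minimal sketch of such a file, with illustrative field names rather than the package's actual schema:

# e.g. configs/env_default.py (field names are illustrative)
import ml_collections

def get_config():
    config = ml_collections.ConfigDict()
    config.mode = '2D'         # planar or spatial model
    config.visualize = False   # open the OpenSim visualizer
    config.horizon = 1000      # maximum episode length
    return config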

The default environment configuration is provided in the configs/env_default.py file. Feel free to change the default configuration as per your needs. A typical script to test an environment (such as the one provided in the bioimitation_envs/imitation_envs/envs directory) is:

from absl import app, flags
from ml_collections import config_flags
import gym
import bioimitation  # registers the BioImitation environments with Gym

FLAGS = flags.FLAGS

flags.DEFINE_string('env_name', 'MuscleWalkingImitation2D-v0', 'Name of the environment.')

config_flags.DEFINE_config_file(
    'config',
    'configs/env_default.py',
    'File path to the environment configuration.',
    lock_config=False)

def main(_):

    # The environment expects its configuration as a plain dict.
    example_config = dict(FLAGS.config)

    env = gym.make(FLAGS.env_name, config=example_config)

    env.reset()

    # Step with random actions, resetting whenever an episode terminates.
    for _ in range(1000):
        _, _, done, _ = env.step(env.action_space.sample())
        if done:
            env.reset()

if __name__ == '__main__':
    app.run(main)

Don't forget to import the bioimitation package before creating an environment, since importing it is what registers the environments with Gym.
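
Assuming the snippet above is saved as, say, test_env.py (the filename is arbitrary), the absl flags make the environment and configuration file selectable from the command line:

python test_env.py --env_name MuscleRunningImitation3D-v0 --config configs/env_default.py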

Citation

If you use this work in your research, please cite the following as:

@misc{mishra2021bioimitation,
    title = {BioImitation-Gym: An OpenAI-Gym Package for Training and Testing Reinforcement Learning Algorithms with OpenSim Models},
    author = {Utkarsh A. Mishra and Dimitar Stanev and Auke Ijspeert},
    year = {2021},
    url = {https://github.com/UtkarshMishra/bioimitation-gym}
}

@article{mishra2021learning,
    title = {Learning Control Policies for Imitating Human Gaits},
    author = {Utkarsh A. Mishra},
    journal = {arXiv preprint arXiv:2106.15273},
    year = {2021}
}

References

[1] OsimRL project: https://osim-rl.kidzinski.com/

[2] OpenSim: https://github.com/opensim-org/opensim-core and https://opensim.stanford.edu/

[3] OpenAI Gym: https://gym.openai.com/

[4] Ray RLlib: https://ray.readthedocs.io/en/latest/

[5] ikostrikov/jaxrl: https://github.com/ikostrikov/jaxrl
