Dynamical movement primitives (DMPs), probabilistic movement primitives (ProMPs), spatially coupled bimanual DMPs.

Overview

Movement Primitives

Movement primitives are a common group of policy representations in robotics. There are many different types and variations. This repository focuses mainly on imitation learning, generalization, and adaptation of movement primitives. It provides implementations in Python and Cython.
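
As a quick impression of the imitation-learning workflow, here is a minimal sketch that fits a DMP to a demonstrated trajectory and reproduces it with a shifted goal. It assumes the DMP class from movement_primitives.dmp with imitate(), configure(), and open_loop() methods; exact names and signatures may differ between versions.

import numpy as np
from movement_primitives.dmp import DMP

# Demonstration: a half-circle arc in 2D, sampled at 101 time steps.
T = np.linspace(0.0, 1.0, 101)
Y = np.column_stack((np.cos(np.pi * T), np.sin(np.pi * T)))
# Fit the forcing term of the DMP to the demonstration.
dmp = DMP(n_dims=2, execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)
# Generalize to a shifted goal and roll out the adapted trajectory.
dmp.configure(start_y=Y[0], goal_y=Y[-1] + np.array([0.1, -0.1]))
T_rollout, Y_rollout = dmp.open_loop()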

Features

  • Dynamical Movement Primitives (DMPs) for
    • positions (with fast Runge-Kutta integration)
    • Cartesian position and orientation (with fast Cython implementation)
    • Dual Cartesian position and orientation (with fast Cython implementation)
  • Coupling terms for synchronization of position and/or orientation of dual Cartesian DMPs
  • Propagation of DMP weight distribution to state space distribution
  • Probabilistic Movement Primitives (ProMPs)

API Documentation

The API documentation is available here.

Install Library

This library requires Python 3.6 or later, and pip is recommended for installation. In the following instructions, we assume that the command python refers to Python 3. If you use the system's Python version, you might have to add the flag --user to any installation command.

I recommend installing the library via pip in editable mode:

python -m pip install -e .[all]

If you don't want to install all dependencies, just omit [all]. Alternatively, you can install the dependencies with

python -m pip install -r requirements.txt

You could also just build the Cython extension with

python setup.py build_ext --inplace

or install the library with

python setup.py install

Non-public Extensions

Note that scripts from the subfolder examples/external_dependencies/ require access to git repositories (URDF files or optional dependencies) that are not publicly available.

MoCap Library

# untested: pip install git+https://git.hb.dfki.de/dfki-interaction/mocap.git
git clone [email protected]:dfki-interaction/mocap.git
cd mocap
python -m pip install -e .
cd ..

Get URDFs

# RH5
git clone [email protected]:models-robots/rh5_models/pybullet-only-arms-urdf.git --recursive
# RH5v2
git clone [email protected]:models-robots/rh5v2_models/pybullet-urdf.git --recursive
# Kuka
git clone [email protected]:models-robots/kuka_lbr.git
# Solar panel
git clone [email protected]:models-objects/solar_panels.git
# RH5 Gripper
git clone [email protected]:motto/abstract-urdf-gripper.git --recursive

Data

Most scripts assume that your data is located in the folder data/. You should put a symlink there that points to your actual data folder.
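
For example, assuming your recordings actually live in /path/to/your/data (a hypothetical path), the symlink can be created with

ln -s /path/to/your/data data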

Build API Documentation

You can build the API documentation with pdoc3. You can install pdoc3 with

pip install pdoc3

... and build the documentation from the main folder with

pdoc movement_primitives --html

It will be located at html/movement_primitives/index.html.

Test

To run the tests, some Python libraries are required. They can be installed with:

python -m pip install -e .[test]

The tests are located in the folder test/ and can be executed with

python -m nose test

This command searches for all files that start with test and executes all functions whose names start with test_.

Contributing

To add new features, extend the documentation, or fix bugs, you can open a pull request. Directly pushing to the main branch is not allowed.

Examples

Conditional ProMPs

Probabilistic Movement Primitives (ProMPs) define distributions over trajectories that can be conditioned on viapoints. In this example, we plot the resulting posterior distribution after conditioning on varying start positions.

Script
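
A minimal sketch of this kind of conditioning, assuming the ProMP class from movement_primitives.promp with imitate(), condition_position(), and sample_trajectories() methods (names taken from the library's API as I understand it; they may differ between versions):

import numpy as np
from movement_primitives.promp import ProMP

# Several noisy demonstrations of the same 1D movement.
n_demos, n_steps = 10, 101
T = np.linspace(0.0, 1.0, n_steps)
Ts = np.tile(T, (n_demos, 1))
rng = np.random.default_rng(0)
Ys = np.array([np.sin(np.pi * T) + 0.05 * rng.normal(size=n_steps)
               for _ in range(n_demos)])[..., np.newaxis]
promp = ProMP(n_dims=1, n_weights_per_dim=10)
promp.imitate(Ts, Ys)
# Condition the trajectory distribution on a new start position y(0) = 0.2
# and sample trajectories from the resulting posterior.
conditioned = promp.condition_position(np.array([0.2]), t=0.0, t_max=1.0)
samples = conditioned.sample_trajectories(T, 5, np.random.RandomState(0))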

Potential Field of 2D DMP

A Dynamical Movement Primitive defines a potential field that superimposes several components: transformation system (goal-directed movement), forcing term (learned shape), and coupling terms (e.g., obstacle avoidance).

Script
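
The coupling-term mechanism can be sketched roughly as follows. CouplingTermObstacleAvoidance2D and the coupling_term argument of open_loop() are assumptions about the library's API and may differ in name or signature:

import numpy as np
from movement_primitives.dmp import DMP, CouplingTermObstacleAvoidance2D

# Straight-line demonstration in 2D.
T = np.linspace(0.0, 1.0, 101)
Y = np.column_stack((T, T))
dmp = DMP(n_dims=2, execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)
# An obstacle close to the demonstrated path; the coupling term adds a
# repelling acceleration to the transformation system during rollout.
coupling_term = CouplingTermObstacleAvoidance2D(np.array([0.5, 0.5]))
T_rollout, Y_rollout = dmp.open_loop(coupling_term=coupling_term)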

DMP with Final Velocity

Not all DMP formulations allow a final velocity greater than zero. In this example, we analyze the effect of changing the final velocity in a variation of the DMP formulation that allows setting it.

Script
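
A minimal sketch, assuming a DMPWithFinalVelocity class and a goal_yd argument of configure() (both taken from the library's API as I remember it; names may differ):

import numpy as np
from movement_primitives.dmp import DMPWithFinalVelocity

T = np.linspace(0.0, 1.0, 101)
Y = np.column_stack((np.cos(np.pi * T), -np.cos(np.pi * T)))
dmp = DMPWithFinalVelocity(n_dims=2, execution_time=1.0, dt=0.01)
dmp.imitate(T, Y)
# Request a nonzero velocity at the end of the movement.
dmp.configure(goal_yd=np.array([1.0, 0.5]))
T_rollout, Y_rollout = dmp.open_loop()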

ProMPs

The LASA Handwriting dataset learned with ProMPs. The dataset consists of 2D handwriting motions. The first and third columns of the plot show demonstrations, and the second and fourth columns show the imitated ProMPs with a 1-sigma interval.

Script

Contextual ProMPs

We use a dataset of Mronga and Kirchner (2021) with 10 demonstrations for each of 3 different panel widths, obtained through kinesthetic teaching. The panel width is the context over which we generalize with contextual ProMPs. Each color in the visualizations above corresponds to a ProMP for a different context.

Script

Dependencies that are not publicly available:

Dual Cartesian DMP

We offer specific dual Cartesian DMPs to control dual-arm robotic systems like humanoid robots.

Scripts: Open3D, PyBullet
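
The state of such a DMP stacks two end-effector poses, each represented as a position plus a quaternion (14 numbers per step). A minimal sketch, assuming the DualCartesianDMP class from movement_primitives.dmp (class name and state layout are assumptions based on the library's conventions):

import numpy as np
from movement_primitives.dmp import DualCartesianDMP

n_steps = 101
T = np.linspace(0.0, 1.0, n_steps)
# Each row: [x1, y1, z1, qw1, qx1, qy1, qz1, x2, y2, z2, qw2, qx2, qy2, qz2]
Y = np.zeros((n_steps, 14))
Y[:, 0] = np.linspace(0.3, 0.5, n_steps)   # left arm moves along x
Y[:, 3] = 1.0                              # identity orientation, left arm
Y[:, 7] = np.linspace(0.3, 0.5, n_steps)   # right arm moves along x
Y[:, 8] = 0.2                              # constant lateral offset, right arm
Y[:, 10] = 1.0                             # identity orientation, right arm
dmp = DualCartesianDMP(execution_time=1.0, dt=0.01, n_weights_per_dim=10)
dmp.imitate(T, Y)
T_rollout, Y_rollout = dmp.open_loop()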

Dependencies that are not publicly available:

Coupled Dual Cartesian DMP

We can introduce a coupling term in a dual Cartesian DMP to constrain the relative position, orientation, or pose of two end-effectors of a dual-arm robot.

Scripts: Open3D, PyBullet

Dependencies that are not publicly available:

Propagation of DMP Distribution to State Space

If we have a distribution over DMP parameters, we can propagate it to state space through an unscented transform.

Script
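
The idea behind the unscented transform can be illustrated independently of the library's API: sigma points of the weight distribution are pushed through the nonlinear rollout and the resulting samples are summarized again by a mean and covariance. The sketch below uses a generic nonlinear function as a stand-in for the DMP rollout; the actual propagation function in this library may be named and parameterized differently.

import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mean, cov) through a nonlinear function f."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    # 2n + 1 sigma points around the mean.
    sigma_points = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    # Push sigma points through f and recompute mean and covariance.
    Y = np.array([f(p) for p in sigma_points])
    mean_y = wm @ Y
    diff = Y - mean_y
    cov_y = (wc[:, np.newaxis] * diff).T @ diff
    return mean_y, cov_y

# Stand-in for "DMP weights -> state-space trajectory": any nonlinear map works.
mean_w = np.zeros(3)
cov_w = 0.1 * np.eye(3)
mean_y, cov_y = unscented_transform(mean_w, cov_w, lambda w: np.tanh(w) + w ** 2)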

Dependencies that are not publicly available:

Funding

This library was initially developed at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (DFKI GmbH) in Bremen. During this phase, the work was supported by a grant from the German Federal Ministry of Economic Affairs and Energy (BMWi, FKZ 50 RA 1701).

Comments
  • Modify the initial method of T in dmp_open_loop_quaternion() to avoid numerical rounding errors

    The original initialization of T in dmp_open_loop_quaternion() is T = [start_t]; while t < run_t: last_t = t; t += dt; T.append(t). This causes numerical rounding errors, for example when run_t = 2.99: at t = 2.07, adding dt should give 2.08, but in practice it becomes 2.0799999999..., and as a result the length of Yr becomes 301. Finally, I am new to GitHub, so I am sorry if I did something wrong with the repo.

    opened by CodingCatMountain 5
  • A problem with CartesianDMP due to the parameter 'dt'

    Hi, this package is very good, and it really helps me learn about learning from demonstrations. But last night I found a problem with open_loop, a function of the CartesianDMP class. The problem is the length of the Python list named Yr in this function. I checked the source code and found: my Y, which is passed to cartesian_dmp.imitate(T, Y), has length 600; Yp in CartesianDMP.open_loop(), which is returned by dmp_open_loop, also has length 600, which is correct; but the length of Yr in CartesianDMP.open_loop() is 601. I believe the relationship between T and dt in dmp_open_loop() and dmp_open_loop_quaternion() has a problem. Please check! T in dmp_open_loop() is initialized as T = np.arange(start_t, run_t + dt, dt), whereas T in dmp_open_loop_quaternion() is initialized as T = [start_t] with start_t = 0.0, followed by a loop: last_t = t; t += dt; T.append(t).

    opened by CodingCatMountain 4
  • CartesianDMP object has no attribute forcing_term

    I would like to save the weights of a trained CartesianDMP. There is no overloaded function get_weights(), so I guess the one from the DMP base class should work. However, calling it raises the error in the title:

    AttributeError: 'CartesianDMP' object has no attribute 'forcing_term'

    Do you know what could be the issue here? Thanks in advance.

    opened by buschbapti 3
  • Can this repo handle periodic motion and orientation?

    Thanks for sharing. Though DMPs are widely used to encode point-to-point movements, implementing a periodic DMP for translation and orientation is still challenging. Can this repository do this? If so, would you provide any examples?

    opened by HongminWu 1