Official codebase for Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

Overview


This repository contains code for training a simulated or real A1 quadrupedal robot to imitate various reference motions, along with pre-trained policies and example training code for learning them.


Project page: https://sites.google.com/berkeley.edu/fine-tuning-locomotion

Getting Started

Install dependencies:

  • Install MPI: sudo apt install libopenmpi-dev
  • Install requirements: pip3 install -r requirements.txt
  • Install the MPC extension (optional): python3 setup.py install --user
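
To quickly verify the Python-side setup, you can try importing the MPI bindings (a minimal check; this assumes requirements.txt pulls in mpi4py, which is why libopenmpi-dev is needed):

python3 -c "import mpi4py; print(mpi4py.__version__)"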

Training Policies in Simulation

To train a policy, run the following command:

python3 motion_imitation/run_sac.py \
--mode train \
--motion_file [path to reference motion, e.g., motion_imitation/data/motions/pace.txt] \
--int_save_freq 1000 \
--visualize
  • --mode can be either train or test.
  • --motion_file specifies the reference motion that the robot is to imitate (not needed for training a reset policy). motion_imitation/data/motions/ contains different reference motion clips.
  • --int_save_freq specifies how often intermediate policies are saved, i.e., every n policy steps.
  • --visualize enables rendering; remove the flag to disable it.
  • adding --train_reset trains a reset policy; otherwise, an imitation policy is trained for the given reference motion (see the example command after this list).
  • adding --use_redq uses REDQ; otherwise, vanilla SAC will be used.
  • the trained model, videos, and logs will be written to output/.
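
For example, a reset policy can be trained in simulation with the following command (a sketch using only the flags listed above; no reference motion is required):

python3 motion_imitation/run_sac.py \
--mode train \
--train_reset \
--int_save_freq 1000 \
--visualize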

Evaluating and/or Fine-Tuning Trained Policies

We provide checkpoints for the pre-trained models used in our experiments in motion_imitation/data/policies/.

Evaluating a Policy in Simulation

To evaluate individual policies, run the following command:

python3 motion_imitation/run_sac.py \
--mode test \
--motion_file [path to reference motion, e.g., motion_imitation/data/motions/pace.txt] \
--model_file [path to imitation model checkpoint, e.g., motion_imitation/data/policies/pace.ckpt] \
--num_test_episodes [# episodes to test] \
--use_redq \
--visualize
  • --motion_file specifies the reference motion that the robot is to imitate. motion_imitation/data/motions/ contains different reference motion clips.
  • --model_file specifies the .ckpt file that contains the trained model. motion_imitation/data/policies/ contains different pre-trained models.
  • --num_test_episodes specifies the number of episodes to run evaluation for.
  • --visualize enables rendering; remove the flag to disable it.
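
For example, to evaluate the provided pacing policy for 10 episodes with rendering (using the example paths above, and assuming the checkpoint was trained with REDQ):

python3 motion_imitation/run_sac.py \
--mode test \
--motion_file motion_imitation/data/motions/pace.txt \
--model_file motion_imitation/data/policies/pace.ckpt \
--num_test_episodes 10 \
--use_redq \
--visualize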

Autonomous Training using a Pre-Trained Reset Controller

To fine-tune policies autonomously, pass a trained reset policy via --getup_model_file (e.g., motion_imitation/data/policies/reset.ckpt) along with a (pre-trained) imitation policy via --model_file.

python3 motion_imitation/run_sac.py \
--mode train \
--motion_file [path to reference motion] \
--model_file [path to imitation model checkpoint] \
--getup_model_file [path to reset model checkpoint] \
--use_redq \
--int_save_freq 100 \
--num_test_episodes 20 \
--finetune \
--real_robot
  • adding --finetune performs fine-tuning; otherwise, hyperparameters for pre-training will be used.
  • adding --real_robot runs training on the real A1 (see below for the packages needed to run the real A1). If the flag is omitted, training runs in simulation, as in the example after this list.
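
For instance, to fine-tune the pacing policy autonomously in simulation (a sketch reusing the example checkpoint paths mentioned above; adjust paths to match your setup):

python3 motion_imitation/run_sac.py \
--mode train \
--motion_file motion_imitation/data/motions/pace.txt \
--model_file motion_imitation/data/policies/pace.ckpt \
--getup_model_file motion_imitation/data/policies/reset.ckpt \
--use_redq \
--int_save_freq 100 \
--num_test_episodes 20 \
--finetune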

To run two SAC trainers simultaneously, one learning to walk forward and one backward, add a reference motion and checkpoint for the second policy and pass the --multitask flag.

python motion_imitation/run_sac.py \
--mode train \
--motion_file motion_imitation/data/motions/pace.txt \
--backward_motion_file motion_imitation/data/motions/pace_backward.txt \
--model_file [path to forward imitation model checkpoint] \
--backward_model_file [path to backward imitation model checkpoint] \
--getup_model_file [path to reset model checkpoint] \
--use_redq \
--int_save_freq 100 \
--num_test_episodes 20 \
--real_robot \
--finetune \
--multitask

Running MPC on the real A1 robot

Since the SDK from Unitree is implemented in C++, the most practical way to interface with the robot from Python is through a C++/Python binding built with pybind11.

Step 1: Build the robot interface

To start, build the Python interface by running the following:

cd third_party/unitree_legged_sdk
mkdir build
cd build
cmake ..
make

Then copy the built robot_interface.XXX.so file to the main directory (where this README.md file is located).
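
A quick way to check that the copied module is importable (this only verifies the build and import path, not communication with the robot):

python3 -c "import robot_interface; print(robot_interface.__file__)"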

Step 2: Set up correct permissions for a non-sudo user

Since the Unitree SDK requires memory locking and high process priority, which are not usually granted without sudo, add the following lines to /etc/security/limits.conf (replace <username> with the user that will run the robot code):

<username> soft memlock unlimited
<username> hard memlock unlimited
<username> soft nice eip
<username> hard nice eip
You may need to reboot the computer for the above changes to take effect.
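
After rebooting, you can confirm the memlock limit from a non-sudo shell; it should report unlimited:

ulimit -l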

Step 3: Test the robot interface

Test the Python interface by running:

sudo python3 -m motion_imitation.examples.test_robot_interface

If the previous steps were completed correctly, the script should finish without throwing any errors.

Note that this code does not do anything on the actual robot.

Running the Whole-body MPC controller

To see the whole-body MPC controller in sim, run:

python3 -m motion_imitation.examples.whole_body_controller_example

To see the whole-body MPC controller on the real robot, run:

sudo python3 -m motion_imitation.examples.whole_body_controller_robot_example
