Learning High-Speed Flight in the Wild

This repo contains the code associated with the paper Learning High-Speed Flight in the Wild. For more information, please check the project webpage.

[Cover image]

Paper, Video, and Datasets

If you use this code in an academic context, please cite the following publication:

Paper: Learning High-Speed Flight in the Wild

Video (Narrated): YouTube

Datasets: Zenodo

Science Paper: DOI

@inproceedings{Loquercio2021Science,
  title={Learning High-Speed Flight in the Wild},
  author={Loquercio, Antonio and Kaufmann, Elia and Ranftl, Ren{\'e} and M{\"u}ller, Matthias and Koltun, Vladlen and Scaramuzza, Davide},
  booktitle={Science Robotics},
  year={2021},
  month={October},
}

Installation

Requirements

The code was tested with Ubuntu 20.04, ROS Noetic, Anaconda v4.8.3, and gcc/g++ 7.5.0. Other OS and ROS versions may work but are not supported.

Before you start, make sure that your compiler versions match gcc/g++ 7.5.0. To do so, use the following commands:

sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 100
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
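
To double-check that the switch worked, you can print the active compiler versions; both should report 7.5.0:

gcc --version
g++ --version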

Step-by-Step Procedure

Use the following commands to create a new catkin workspace and a virtual environment with all the required dependencies.

export ROS_VERSION=noetic
mkdir agile_autonomy_ws
cd agile_autonomy_ws
export CATKIN_WS=./catkin_aa
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color
cd src

git clone git@github.com:uzh-rpg/agile_autonomy.git
vcs-import < agile_autonomy/dependencies.yaml
cd rpg_mpl_ros
git submodule update --init --recursive

# Install extra dependencies (you might need more depending on your OS)
sudo apt-get install libqglviewer-dev-qt5

# Install external libraries for rpg_flightmare
sudo apt install -y libzmqpp-dev libeigen3-dev libglfw3-dev libglm-dev

# Install dependencies for rpg_flightmare renderer
sudo apt install -y libvulkan1 vulkan-utils gdb

# Add environment variables (Careful! Modify path according to your local setup)
echo 'export RPGQ_PARAM_DIR=/home/<path-to-your-workspace>/catkin_aa/src/rpg_flightmare' >> ~/.bashrc
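
As an optional sanity check, you can re-source your shell configuration and confirm that the variable points to your rpg_flightmare checkout:

source ~/.bashrc
echo $RPGQ_PARAM_DIR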

Now open a new terminal and type the following commands.

# Build and re-source the workspace (run these from inside catkin_aa/src)
catkin build
. ../devel/setup.bash

# Create your learning environment
roscd planner_learning
conda create --name tf_24 python=3.7
conda activate tf_24
conda install tensorflow-gpu
pip install rospkg==1.2.3 pyquaternion open3d opencv-python
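
As an optional check, you can verify that TensorFlow imports correctly and, if a GPU is available, that it is visible (an empty list simply means you will run on CPU):

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"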

Now download the flightmare standalone available at this link, extract it, and put it in the flightrender folder.
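
As a sketch, assuming the downloaded archive is called standalone.tar (a hypothetical name) and the flightrender folder sits at catkin_aa/src/rpg_flightmare/flightrender (adjust both to your setup):

cd agile_autonomy_ws
# extract the standalone into the flightrender folder (archive name and path are assumptions)
tar -xvf ~/Downloads/standalone.tar -C catkin_aa/src/rpg_flightmare/flightrender/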

Let's Fly!

Once you have installed the dependencies, you will be able to fly in simulation with our pre-trained checkpoint. You don't necessarily need a GPU for execution. Note that if the network cannot run at 15 Hz or faster, you won't be able to fly successfully.

Launch the simulation! Open a terminal and type:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Run the network in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python test_trajectories.py --settings_file=config/test_settings.yaml

Change execution speed or environment

You can change the average speed at which the policy will fly as well as the environment type by changing the following files.

Environment Change:

rosed agile_autonomy flightmare.yaml

Set either spawn_trees or spawn_objects to true. Enabling both at the same time is possible but would make the environment too dense for navigation. Also adapt the spacings parameter in test_settings.yaml to the environment.
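
If you are not sure where these flags live, you can locate and inspect them directly; this just searches the agile_autonomy package that rosed resolves above:

# list every occurrence of the spawn flags inside the agile_autonomy package
grep -rnE "spawn_trees|spawn_objects" $(rospack find agile_autonomy)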

Speed Change:

rosed agile_autonomy default.yaml

Edit test_time_velocity and maneuver_velocity to the desired speed. Note that the checkpoint we provide works for all speeds in the range [1, 10] m/s. However, to reach the best performance at a specific speed, consider fine-tuning the checkpoint at that speed (see the training instructions below).
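
As a non-interactive alternative to rosed, the sketch below locates default.yaml inside the agile_autonomy package and sets both velocities to 7 m/s; it assumes the parameters are stored as flat "key: value" entries:

# find default.yaml and overwrite both velocity entries (assumed flat YAML layout)
FILE=$(find $(rospack find agile_autonomy) -name default.yaml | head -n 1)
sed -i 's/^\(\s*test_time_velocity:\).*/\1 7.0/; s/^\(\s*maneuver_velocity:\).*/\1 7.0/' "$FILE"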

Train your own navigation policy

There are two ways to train your own policy: an easy one and a more involved one. The trained checkpoint can then be used to control a physical platform (if you have one!).

Use pre-collected dataset

The first method, requiring the least effort, is to use a dataset that we pre-collected. The dataset can be found at this link. It was collected at an average speed of 7 m/s and was used to train the model we provide. To use it, adapt the file train_settings.yaml to point to the train and test folders and run:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python train.py --settings_file=config/train_settings.yaml

Feel free to ablate the impact of each parameter!

Collect your own dataset

You can use the following commands to generate data in simulation and train your model on it. Note that training a policy from scratch can require a lot of data, and depending on the speed of your machine this could take several days. Therefore, we always recommend fine-tuning the provided checkpoint for your use case. As a general rule of thumb, you need a dataset of comparable size to ours to train a policy from scratch, but only about one tenth of that to fine-tune.

Generate data

To train or fine-tune a policy, use the following commands. Launch the simulation in one terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
roslaunch agile_autonomy simulation.launch

Launch data collection (with DAgger) in another terminal:

cd agile_autonomy_ws
source catkin_aa/devel/setup.bash
conda activate tf_24
python dagger_training.py --settings_file=config/dagger_settings.yaml

It is possible to change parameters (number of rollouts, DAgger constants, tracking a global trajectory, etc.) in the file dagger_settings.yaml. Keep in mind that if you change the network or its input, you will need to adapt the file test_settings.yaml for compatibility.

When training from scratch, the platform should follow a pre-computed global trajectory so that the labels are consistent. To activate this, set the flag perform_global_planning to true in both default.yaml and label_generation.yaml. Note that this will make the simulation slower (a global plan has to be computed at each iteration). The network will not have access to this global plan, but only to the straight (possibly colliding) reference.

Visualize the Data

You can visualize the generated trajectories in open3d using the visualize_trajectories.py script.

python visualize_trajectories.py --data_dir /PATH/TO/rollout_21-09-21-xxxx --start_idx 0 --time_steps 100 --pc_cutoff_z 2.0 --max_traj_to_plot 100

The result should look more or less like the following:

[Visualization of the generated trajectory labels]

Test the Network

To test the network you trained, adapt test_settings.yaml with the new checkpoint path. You might consider setting the flag perform_global_planning in default.yaml back to false to make the simulation faster. Then follow the instructions in the section above (Let's Fly!) to test.

Acknowledgements

We would like to thank Yunlong Song and Selim Naji for their help with the implementation of the simulation environment. The code for global planning is strongly inspired by Search-based Motion Planning for Aggressive Flight in SE(3).
