Deep-reinforcement-learning-based autonomous navigation for quadcopters using the PPO algorithm.

PPO-based Autonomous Navigation for Quadcopters

This repository contains an implementation of Proximal Policy Optimization (PPO) for autonomous navigation of a quadcopter in a corridor environment. Every 4 meters there is a block with a circular opening, and the agent is expected to fly through these openings without colliding with the blocks. This project currently runs only on Windows, since the Unreal environments were packaged for Windows.

🛠️ Libraries & Tools

Overview

The training environment has 9 sections with different textures and hole positions. The agent starts in one of these sections at random, and its exact starting point is also randomized within a specific region of the yz-plane.

Observation Space

  • The state is an RGB image captured by the agent's front camera.
  • Image shape: 50 x 50 x 3

Action Space

  • There are 9 discrete actions.
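
For illustration, here is a minimal sketch of how these two spaces could be declared with gym; the exact bounds and the meaning of the 9 actions are defined in scripts/airsim_env.py and may differ.

# Minimal sketch of the observation/action spaces (assumed, not the repo's exact code)
import numpy as np
from gym import spaces

image_shape = (50, 50, 3)  # height x width x RGB, matching the front camera capture

# Pixel values in [0, 255]; the agent observes one camera frame per step
observation_space = spaces.Box(low=0, high=255, shape=image_shape, dtype=np.uint8)

# 9 discrete actions; their mapping to velocity/heading commands is an assumption here
action_space = spaces.Discrete(9)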

Environment setup to run the code

#️⃣ 1. Clone the repository

git clone https://github.com/bilalkabas/PPO-based-Autonomous-Navigation-for-Quadcopters

#️⃣ 2. From Anaconda command prompt, create a new conda environment

I recommend using Anaconda or Miniconda to create a virtual environment.

conda create -n ppo_drone python==3.8

#️⃣ 3. Install required libraries

Inside the main directory of the repo, run:

conda activate ppo_drone
pip install -r requirements.txt

#️⃣ 4. (Optional) Install PyTorch for GPU

You must have a CUDA-supported NVIDIA GPU.

Details for installation

For this project, I used CUDA 11.0 and the following conda command to install PyTorch:

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch

#️⃣ 5. Edit settings.json

The settings.json file is located in the Documents\AirSim folder. Its content should be as below:

{
    "SettingsVersion": 1.2,
    "LocalHostIp": "127.0.0.1",
    "SimMode": "Multirotor",
    "ClockSpeed": 20,
    "ViewMode": "SpringArmChase",
    "Vehicles": {
        "drone0": {
            "VehicleType": "SimpleFlight",
            "X": 0.0,
            "Y": 0.0,
            "Z": 0.0,
            "Yaw": 0.0
        }
    },
    "CameraDefaults": {
        "CaptureSettings": [
            {
                "ImageType": 0,
                "Width": 50,
                "Height": 50,
                "FOV_Degrees": 120
            }
        ]
    }
  }
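
Once the environment executable is running with these settings, you can optionally check that the simulator accepts RPC connections before starting any training. Below is a minimal sketch using the standard airsim Python client; the IP matches "LocalHostIp" above.

# Optional connectivity check (a sketch using the standard airsim Python client)
import airsim

client = airsim.MultirotorClient(ip="127.0.0.1")  # same address as "LocalHostIp" in settings.json
client.confirmConnection()  # prints version info if the simulator is reachable, raises a transport error otherwise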

How to run the training?

Make sure you followed the instructions above to set up the environment.

#️⃣ 1. Download the training environment

Go to the releases page and download TrainEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and start the training

Then, inside the repository, run:

python main.py
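
Under the hood, main.py wraps the AirSim gym environment with stable-baselines3 utilities and trains a PPO agent on the image observations. The sketch below is a simplified illustration rather than the repository's exact code; the environment id, hyperparameters, and save path are assumptions.

# Simplified training sketch with stable-baselines3 (env id, hyperparameters, and paths are assumptions)
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv

# Wrap the AirSim corridor environment; "airsim-train-env-v0" is a hypothetical id
env = DummyVecEnv([lambda: Monitor(gym.make("airsim-train-env-v0"))])

# Image observations (50x50x3) call for a CNN policy
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=280_000)  # the bundled policy was trained for 280k steps
model.save("saved_policy/ppo_navigation_policy")  # hypothetical file name inside saved_policy/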

How to run the pretrained model?

Make sure you followed the instructions above to set up the environment. To speed up training, the simulation runs at 20x speed; you may want to change the "ClockSpeed" parameter in settings.json back to 1 to watch the trained policy in real time.

#️⃣ 1. Download the test environment

Go to the releases page and download TestEnv.zip. After the download completes, extract it.

#️⃣ 2. Now, you can open up the environment's executable file and run the trained model

Then, inside the repository, run:

python policy_run.py
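
Conceptually, policy_run.py loads the saved PPO policy and steps the test environment with deterministic actions. A rough sketch follows; the environment id and the policy file name are assumptions.

# Rough evaluation sketch (env id and policy file name are assumptions)
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv

env = DummyVecEnv([lambda: Monitor(gym.make("airsim-test-env-v0"))])
model = PPO.load("saved_policy/ppo_navigation_policy", env=env)

obs = env.reset()
done = [False]
while not done[0]:
    action, _states = model.predict(obs, deterministic=True)  # greedy action from the trained policy
    obs, reward, done, info = env.step(action)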

Training results

The trained model in the saved_policy folder was trained for 280k steps.

Test results

The test environment has different textures and hole positions than the training environment. Over 100 episodes, the trained model travels 17.5 m and passes through 4 holes on average without any collision. At best, the agent passes through 9 holes in the test environment without any collision.

Author

Bilal Kabas (BSc., Electrical & Electronics Engineering; Undergraduate Researcher: Robotics, Computer Vision, ML & DL)

License

This project is licensed under the GNU Affero General Public License.

You might also like...
Tackling Obstacle Tower Challenge using PPO & A2C combined with ICM.
Tackling Obstacle Tower Challenge using PPO & A2C combined with ICM.

Obstacle Tower Challenge using Deep Reinforcement Learning Unity Obstacle Tower is a challenging realistic 3D, third person perspective and procedural

Episodic Transformer (E.T.) is a novel attention-based architecture for vision-and-language navigation. E.T. is based on a multimodal transformer that encodes language inputs and the full episode history of visual observations and actions. A clean and robust Pytorch implementation of PPO on continuous action space.
A clean and robust Pytorch implementation of PPO on continuous action space.

PPO-Continuous-Pytorch I found the current implementation of PPO on continuous action space is whether somewhat complicated or not stable. And this is

PPO Lagrangian in JAX

PPO Lagrangian in JAX This repository implements PPO in JAX. Implementation is tested on the safety-gym benchmark. Usage Install dependencies using th

GndNet: Fast ground plane estimation and point cloud segmentation for autonomous vehicles using deep neural networks.
GndNet: Fast ground plane estimation and point cloud segmentation for autonomous vehicles using deep neural networks.

GndNet: Fast Ground plane Estimation and Point Cloud Segmentation for Autonomous Vehicles. Authors: Anshul Paigwar, Ozgur Erkent, David Sierra Gonzale

Conservative Q Learning for Offline Reinforcement Reinforcement Learning in JAX
Conservative Q Learning for Offline Reinforcement Reinforcement Learning in JAX

CQL-JAX This repository implements Conservative Q Learning for Offline Reinforcement Reinforcement Learning in JAX (FLAX). Implementation is built on

Reinforcement-learning - Repository of the class assignment questions for the course on reinforcement learning

DSE 314/614: Reinforcement Learning This repository containing reinforcement lea

This solves the autonomous driving issue which is supported by deep learning technology. Given a video, it splits into images and predicts the angle of turning for each frame.
This solves the autonomous driving issue which is supported by deep learning technology. Given a video, it splits into images and predicts the angle of turning for each frame.

Self Driving Car An autonomous car (also known as a driverless car, self-driving car, and robotic car) is a vehicle that is capable of sensing its env

A resource for learning about deep learning techniques from regression to LSTM and Reinforcement Learning using financial data and the fitness functions of algorithmic trading

A tour through tensorflow with financial data I present several models ranging in complexity from simple regression to LSTM and policy networks. The s

Comments
  • A warning I met when running "python policy_run.py"

    I have followed each step as suggested by the readme. However, I encounter the following problem:

    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    WARNING:tornado.general:Connect error on fd 336: WSAECONNREFUSED
    Traceback (most recent call last):
      File "policy_run.py", line 14, in <module>
        env = DummyVecEnv([lambda: Monitor(
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 25, in __init__
        self.envs = [fn() for fn in env_fns]
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\stable_baselines3\common\vec_env\dummy_vec_env.py", line 25, in <listcomp>
        self.envs = [fn() for fn in env_fns]
      File "policy_run.py", line 15, in <lambda>
        gym.make(
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 235, in make
        return registry.make(id, **kwargs)
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 129, in make
        env = spec.make(**kwargs)
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\gym\envs\registration.py", line 90, in make
        env = cls(**_kwargs)
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 169, in __init__
        super(TestEnv, self).__init__(ip_address, image_shape, env_config)
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 19, in __init__
        self.setup_flight()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 174, in setup_flight
        super(TestEnv, self).setup_flight()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim_env.py", line 36, in setup_flight
        self.drone.reset()
      File "E:\Project\PPO_based_ANfQ\PPO-based-Autonomous-Navigation-for-Quadcopters\scripts\airsim\client.py", line 26, in reset
        self.client.call('reset')
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\msgpackrpc\session.py", line 41, in call
        return self.send_request(method, args).get()
      File "E:\Anaconda\envs\PPO_drone\lib\site-packages\msgpackrpc\future.py", line 43, in get
        raise self._error
    msgpackrpc.error.TransportError: Retry connection over the limit

    I would be grateful if anyone could tell me how to fix this.

    opened by XiAoSSuper