PICO is an algorithm for applying Reinforcement Learning (RL) to Multi-Agent Path Finding (MAPF) tasks.

Overview


PICO is an algorithm for applying Reinforcement Learning (RL) to Multi-Agent Path Finding (MAPF) tasks. It is developed by the Multi-Agent Artificial Intelligence Lab (MAIL) at East China Normal University and the AI Research Institute at Geekplus Technology Co., Ltd. PICO is built on the framework of PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning, and focuses on collision avoidance rather than manual post-processing after a collision occurs. Exploiting decentralized communication and implicit priority in these scenarios leads to better path finding. More details can be found in our paper Multi-Agent Path Finding with Prioritized Communication Learning, which was accepted to ICRA 2022.

Distributed Assembly

Reinforcement learning code to train multiple agents to collaboratively plan their paths in a 2D grid world.

Key Components of PICO

  • pico_training.py: Multi-agent training code. Training runs on the GPU by default; change the line "with tf.device("/gpu:0"):" to "with tf.device("/cpu:0"):" to train on the CPU (much slower). A device-toggle sketch follows this list. Researchers can also flexibly customize their configuration in this file.
  • mapf_gym.py: Multi-agent path planning gym environment, in which agents learn collective path planning.
  • pico_testing.py: Code to run systematic validation tests of PICO. Test environments are loaded from the saved_environments folder as .npy files, and the results are written to a given folder (by default: test_result).
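
The GPU/CPU switch mentioned above is a single tf.device context wrapped around graph construction. Below is a minimal sketch under TensorFlow 1.x; the toy tensor only stands in for the ACNet/A3C graph that pico_training.py actually builds.

import tensorflow as tf

# change "/gpu:0" to "/cpu:0" to train on the CPU (much slower)
with tf.device("/gpu:0"):
    a = tf.constant([1.0, 2.0, 3.0])
    total = tf.reduce_sum(a)  # stand-in for the ACNet / A3C graph

# allow_soft_placement lets the toy example fall back to the CPU if no GPU is present
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    print(sess.run(total))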

Installation

git clone https://github.com/mail-ecnu/PICO.git
cd PICO
conda env create -f conda_env.yml
conda activate PICO-dev

Before training: compile the cpp_mstar code

  • cd into the od_mstar3 folder.
  • Run python3 setup.py build_ext (you may need --inplace as an extra argument).
  • Copy the resulting .so object from build/lib.*/ to the root of the od_mstar3 folder.
  • Check by going back to the root of the repository, running python3, and importing cpp_mstar (see the sketch below).
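
A quick sanity check can be run from the repository root. The import is exactly the check described above; the find_path call mirrors how PRIMAL-style code queries the M* planner, so the argument order and the obstacle encoding below are assumptions rather than documented PICO API.

import numpy as np
import cpp_mstar  # the compiled C++ M* extension from od_mstar3

# assumed encoding: 0 = free cell, 1 = obstacle
world  = np.zeros((10, 10), dtype=int)
starts = [(0, 0), (9, 9)]
goals  = [(9, 0), (0, 9)]

# find_path(world, start_positions, goals, inflation, time_limit), as used by PRIMAL
paths = cpp_mstar.find_path(world, starts, goals, 2, 5)
print(paths)  # joint plan: one tuple of agent positions per timestep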

Quick Examples

pico_training.py:

episode_count          = 0
MAX_EPISODE            = 20
EPISODE_START          = episode_count
gamma                  = .95 # discount rate for advantage estimation and reward discounting
#moved network parameters to ACNet.py
EXPERIENCE_BUFFER_SIZE = 128
GRID_SIZE              = 11 #the size of the FOV grid to apply to each agent
ENVIRONMENT_SIZE       = (10,20)#(10,70) the total size of the environment (length of one side)
OBSTACLE_DENSITY       = (0,0.3) #(0,0.5) range of densities
DIAG_MVMT              = False # Diagonal movements allowed?
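# 5 base actions (stay + 4 cardinal moves), plus 4 diagonal moves when DIAG_MVMT is True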
a_size                 = 5 + int(DIAG_MVMT)*4
SUMMARY_WINDOW         = 10
NUM_META_AGENTS        = 3
NUM_THREADS            = 8 #int(multiprocessing.cpu_count() / (2 * NUM_META_AGENTS))
# max_episode_length     = 256 * (NUM_THREADS//8)
max_episode_length     = 256
NUM_BUFFERS            = 1 # NO EXPERIENCE REPLAY int(NUM_THREADS / 2)
EPISODE_SAMPLES        = EXPERIENCE_BUFFER_SIZE # 64
LR_Q                   = 2.e-5
ADAPT_LR               = True
ADAPT_COEFF            = 5.e-5 #the coefficient A in LR_Q/sqrt(A*steps+1) for calculating LR
load_model             = False
RESET_TRAINER          = False
gifs_path              = 'gifs'
from datetime import datetime
TIMESTAMP = "{0:%Y-%m-%dT%H-%M/}".format(datetime.now())

GLOBAL_NET_SCOPE       = 'global'

#Imitation options
PRIMING_LENGTH         = 2500    #0 number of episodes at the beginning to train only on demonstrations
DEMONSTRATION_PROB     = 0.5
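
When ADAPT_LR is enabled, the comment on ADAPT_COEFF describes a learning rate that decays as LR_Q / sqrt(ADAPT_COEFF * steps + 1). A minimal plain-Python sketch of that schedule, reusing the names from the block above:

import math

LR_Q        = 2.e-5
ADAPT_COEFF = 5.e-5

def adapted_lr(steps):
    # LR_Q / sqrt(A * steps + 1), as described in the ADAPT_COEFF comment
    return LR_Q / math.sqrt(ADAPT_COEFF * steps + 1)

print(adapted_lr(0))        # 2e-05 at the start of training
print(adapted_lr(1000000))  # decayed value after one million steps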

Then

python pico_training.py

Custom testing

Edit pico_testing.py to match the training settings. By default, the model is loaded from the model folder.

Then

python pico_testing.py
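
The test cases are stored as .npy files in saved_environments. A hypothetical way to list them and peek at the stored arrays before running the script (the array layout itself is PICO-specific and not documented here):

import glob
import numpy as np

# read-only inspection of the bundled test environments
for path in sorted(glob.glob("saved_environments/*.npy"))[:5]:
    data = np.load(path, allow_pickle=True)
    print(path, getattr(data, "shape", None))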

Requirements

  • Python 3.4
  • Cython 0.28.4
  • OpenAI Gym 0.9.4
  • Tensorflow 1.3.1
  • Numpy 1.13.3
  • matplotlib
  • imageio (for GIF creation)
  • tk
  • networkx (if using od_mstar.py and not the C++ version)

Citing our work

If you use this repo in your work, please consider citing the corresponding paper (first two authors contributed equally):

@InProceedings{lichen2022mapf,
  title     = {Multi-Agent Path Finding with Prioritized Communication Learning},
  author    = {Li, Wenhao and Chen, Hongjun and Jin, Bo and Tan, Wenzhe and Zha, Hongyuan and Wang, Xiangfeng},
  booktitle = {ICRA},
  year      = {2022},
  pdf       = {https://arxiv.org/pdf/2202.03634},
  url       = {https://arxiv.org/abs/2202.03634},
}

License

Licensed under the MIT License.
