SGoLAM - Simultaneous Goal Localization and Mapping

PyTorch implementation of the MultiON runner-up entry, SGoLAM: Simultaneous Goal Localization and Mapping [Talk Video]. Our method does not involve any neural network training, yet shows competitive performance on the MultiON benchmark. In fact, we outperform the winning entry by a large margin in terms of success rate.

We encourage future participants of the MultiON challenge to use our code as a starting point for implementing more sophisticated navigation agents. If you have any questions about running SGoLAM, please open an issue.

Notes on Installation

To run experiments locally/on a server, follow the 'bag of tricks' below:

  1. Follow the steps provided in the original MultiON repository. (Don't bother looking at other repositories!)
  2. During the installation process, numerous dependency errors will occur. Don't look for other workarounds; just humbly install what is missing.
  3. For installing Pytorch and other CUDA dependencies, it seems like the following command works: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch.
  4. Note that habitat-lab is much easier to install than habitat-sim. You don't necessarily need to follow the instructions provided in the MultiON repository for habitat-lab; just go directly to the habitat-lab repository and install it. For habitat-sim, however, you must follow MultiON's directions, or a pile of bugs will occur.
  5. Once python evaluate.py is run, a horrifying pile of dependency errors will occur. Below are some of the prominent ones (a consolidated fix snippet follows this list).
  6. To solve AttributeError: module 'attr' has no attribute 's', run pip uninstall attr and then run pip install attrs.
  7. To solve ModuleNotFoundError: No module named 'imageio', run pip install imageio-ffmpeg.
  8. To solve ImportError: ModuleNotFoundError: No module named 'magnum', run pip install build/deps/magnum-bindings/src/python.
  9. The last and most important 'trick' is to google the errors. The Habitat team seems to be doing a great job answering GitHub issues; probably someone has already run into the error you are facing.
  10. If additional 'tricks' are found, feel free to share them by appending to this list.
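For convenience, here are the dependency fixes from items 6-8 collected in one place. These are the same commands as above, so run each only if you hit the corresponding error (the -y flag just skips pip's confirmation prompt):

pip uninstall -y attr && pip install attrs          # fixes the 'attr' AttributeError
pip install imageio-ffmpeg                          # fixes the missing 'imageio' module
pip install build/deps/magnum-bindings/src/python   # fixes the missing 'magnum' module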

Docker Sanity Check (Last Modified: 2021.03.26:20:11)

A number of commands for running a docker sanity check.

Login

First, log in to the Docker Hub repository. As our accounts don't support private repositories with multiple collaborators, we need to share a single ID; for the time being, let's use mine. Type the following command:

docker login

You will now be prompted for a user ID and password. Use ID: esteshills, PW: 82magnolia.
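For non-interactive use (e.g., inside a script), the same login can be done with the stock docker CLI flags; this is standard docker login usage, not a project-specific helper:

echo "82magnolia" | docker login -u esteshills --password-stdin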

Pull Image

I have already built an image ready for preliminary submission. It can be pulled with the following command:

docker pull esteshills/multion_test:tagname

Run Evaluation

To run an evaluation for the standard submission, run the commands below. Make sure DATA_DIR and ORIG_DATA_DIR in scripts/test_docker.sh are modified before running.

cd scripts/
./test_docker.sh
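For reference, such a script essentially boils down to a docker run with the dataset directories mounted. The sketch below is hypothetical: the variable names DATA_DIR and ORIG_DATA_DIR come from the script itself, but the container-side mount targets are assumptions, so treat scripts/test_docker.sh as authoritative.

# Hypothetical sketch; see scripts/test_docker.sh for the real paths and flags.
DATA_DIR=/path/to/your/data            # host-side data directory
ORIG_DATA_DIR=/path/to/original/data   # host-side original MultiON data
docker run --gpus all \
    -v "$DATA_DIR":/data \
    -v "$ORIG_DATA_DIR":/orig_data \
    esteshills/multion_test:tagname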

Playing around with Docker Images

One may want to examine the docker image further. To do so, run the following commands:

cd scripts/
./test_docker_bash.sh

Again, make sure DATA_DIR and ORIG_DATA_DIR in scripts/test_docker.sh are modified before running. Note that the commands provided in the MultiON repository can be run inside the container. For example:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --agent-type no-map --run-type eval

In order to run other baselines: i) download the model checkpoint, ii) modify the checkpoint path in the .yaml file, and iii) change the agent type, as in the example below.
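For instance, evaluating the oracle baseline would look something like the following. The config file name and agent type here follow the MultiON repository's naming pattern but are illustrative, so check the actual files shipped in the image:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/ppo_multinav_oracle.yaml --agent-type oracle --run-type eval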

Preventing Hassles with Docker (Last Modified: 2021.04.08:09:07)

By now, there is probably no need to develop inside docker. Just plug in your favorite agent by following the instructions below.

Plug-and-Play New Agents

One can easily test new agents by providing the name of the file containing the agent implementation. To implement a new agent, please refer to agents/example.py. To test a new agent and get evaluation results, run the following command (this is an example for the no_map baseline):

python evaluate.py --agent_module no_map_walker --exp_config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --checkpoint_path model_checkpoints/ckpt.0.pth --no_fill
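For orientation, an agent module is roughly a class with a reset/act interface. The sketch below assumes a habitat.Agent-style API and a hypothetical module agents/my_agent.py; it is not the verbatim contents of agents/example.py, which remains the authoritative reference:

# Illustrative sketch of an agent module (e.g., agents/my_agent.py).
# The authoritative interface is the one in agents/example.py.
import habitat

class MyWalker(habitat.Agent):
    def reset(self):
        # Called at the start of each episode; clear per-episode state here.
        pass

    def act(self, observations):
        # Map the current observations to a discrete action.
        # Always moving forward is a trivial placeholder policy.
        return {"action": "MOVE_FORWARD"}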

In addition, one can change the number of episodes to be tested. However, this feature is only available in the annotated branch, as it requires a slight modification of the core habitat repository. Run the following command to change the number of episodes; it will not produce any bugs on the main branch either, but the argument will have no effect there.

python evaluate.py --agent_module no_map_walker --exp_config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --checkpoint_path model_checkpoints/ckpt.0.pth --no_fill --num_episodes 100

Plug-and-Play New Agents from Local Host

Running Agents

Suppose one has implementations of navigation agents that are not yet pushed to agents/. These can be tested on the fly using a handy script provided in scripts/. First, put all the agent implementations inside extern_agents/, similar to the implementations in agents/. Then run the following command with the agent module you want to run. For example, if the new agent module is located in extern_agents/new_agent.py, run:

./scripts/test_docker_agent.sh new_agent

Make sure the agents are located in the extern_agents/ folder. This way, there is no need to deal with docker directly; docker is merely used as a black box for running evaluations.

Now suppose one needs to debug an agent in the docker environment. This can be done by running the following script; it will open bash with extern_agents/ mounted.

./scripts/test_docker_agent_bash.sh

To run evaluations inside the docker container, run the following command with the agent module name (in this case new_agent) provided.

./scripts/extern_eval.sh new_agent
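Internally, such a script presumably reduces to the same evaluate.py invocation shown earlier, with the module name substituted in. The line below is a sketch of the equivalent call, not the script's verbatim contents:

python evaluate.py --agent_module new_agent --exp_config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --checkpoint_path model_checkpoints/ckpt.0.pth --no_fill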

Playing Agent Episodes with Video

Agent trajectories can be visualized per episode with the scripts in scripts/. Again, put all the agent implementations inside extern_agents/. Then run the following command with the agent module you want to run. For example, if the new agent module is located in extern_agents/new_agent.py, run:

./scripts/test_docker_agent_video.sh new_agent 

Make sure the mount paths are set correctly inside ./scripts/test_docker_agent_video.sh.

To run evaluations inside the docker container, run the following command with the agent module name (in this case new_agent) and video save directory (in this case ./test_dir) provided.

./scripts/extern_eval_video.sh new_agent ./test_dir

Caveats

The original implementation assumes two GPUs are available, so bugs may occur if only a single GPU is present. In this case, do not run the docker scripts directly, as they will return errors. Instead, connect to a docker container with bash and first modify the baseline .yaml configuration so that it uses only a single GPU (a sketch is given below). Then run the *_eval*.sh scripts. I am planning to remedy this issue in a similar plug-and-play fashion, but for the time being, stick to this procedure.
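As a sketch of the single-GPU change, the relevant keys in a typical habitat_baselines config look like the following; the exact key names may differ in the baseline .yaml shipped in the container, so verify them there:

# In the baseline .yaml: point both the simulator and torch at the same GPU.
NUM_PROCESSES: 1
SIMULATOR_GPU_ID: 0
TORCH_GPU_ID: 0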
