SGoLAM - Simultaneous Goal Localization and Mapping

PyTorch implementation of the MultiON runner-up entry, SGoLAM: Simultaneous Goal Localization and Mapping [Talk Video]. Our method trains no neural networks, yet performs competitively on the MultiON benchmark; in fact, it outperforms the winning entry by a large margin in terms of success rate.

We encourage future participants of the MultiON challenge to use our code as a starting point for implementing more sophisticated navigation agents. If you have any questions about running SGoLAM, please open an issue.

Notes on Installation

To run experiments locally/on a server, follow the 'bag of tricks' below:

  1. Abide by the steps provided in the original MultiON repository. (Don't bother looking at other repositories!)
  2. Numerous dependency errors will occur along the way. Don't look for workarounds; just humbly install whatever is missing.
  3. For installing PyTorch and other CUDA dependencies, the following command seems to work: conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch.
  4. Note that habitat-lab installation is much easier than habitat-sim. You don't need to follow the MultiON instructions for habitat-lab; just go directly to the habitat-lab repository and install it from there. For habitat-sim, however, you must follow MultiON's directions, or a pile of bugs will occur.
  5. Once python evaluate.py is run, a horrifying pile of dependency errors will occur. The tricks below handle the prominent ones (they are also collected into a one-shot snippet after this list).
  6. To solve AttributeError: module 'attr' has no attribute 's', run pip uninstall attr and then pip install attrs.
  7. To solve ModuleNotFoundError: No module named 'imageio', run pip install imageio-ffmpeg.
  8. To solve ModuleNotFoundError: No module named 'magnum', run pip install build/deps/magnum-bindings/src/python.
  9. The last and most important 'trick' is to google errors. The Habitat team seems to do a great job answering GitHub issues; someone has probably already run into the error you are facing.
  10. If you find additional 'tricks', feel free to share them by appending to this list.
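
For convenience, a minimal sketch that applies tricks 6-8 in one go (the same pip commands as above, nothing more):

pip uninstall -y attr && pip install attrs
pip install imageio-ffmpeg
pip install build/deps/magnum-bindings/src/python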

Docker Sanity Check (Last Modified: 2021.03.26:20:11)

A few commands for a Docker sanity check.

Login

First, log in to the Docker Hub repository. Since our accounts don't support private repositories with multiple collaborators, we need to share a single ID; for the time being, let's use mine. Type the following command:

docker login

You will then be prompted for a user ID and password. Type ID: esteshills, PW: 82magnolia.
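
Alternatively, the user ID can be passed directly via the standard docker CLI flag, with only the password entered at the prompt:

docker login -u esteshills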

Pull Image

I have already built an image ready for preliminary submission. It can be easily pulled using the following command.

docker pull esteshills/multion_test:tagname

Run Evaluation

To run an evaluation for standard submission, run the following commands. Make sure DATA_DIR and ORIG_DATA_DIR in scripts/test_docker.sh are modified before running.

cd scripts/
./test_docker.sh
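
For reference, a hypothetical excerpt of the two variables in scripts/test_docker.sh; the variable names come from the script, but the paths below are placeholders for your local layout:

DATA_DIR=/path/to/multion/episode_data        # placeholder: data mounted into the container
ORIG_DATA_DIR=/path/to/original/habitat/data  # placeholder: original dataset on the host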

Playing around with Docker Images

One may want to examine the docker image further. Run the following commands:

cd scripts/
./test_docker_bash.sh

Again, make sure DATA_DIR and ORIG_DATA_DIR from scripts/test_docker.sh are modified before running. Note that the commands provided in the MultiON repository can be run inside the container. For example:

python habitat_baselines/run.py --exp-config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --agent-type no-map --run-type eval

In order to run other baselines: i) modify the checkpoint path in the .yaml file (a sketch of this step is shown below), ii) download the model checkpoint, and iii) change the agent type.
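
As a sketch of step i), the value to edit is the evaluation checkpoint path; EVAL_CKPT_PATH_DIR is the key habitat-baselines configs typically use, so verify it against the actual .yaml:

EVAL_CKPT_PATH_DIR: model_checkpoints/ckpt.0.pth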

Preventing Hassles with Docker (Last Modified: 2021.04.08:09:07)

By now, we probably don't need to develop with docker. Just plug in your favorite agent following the instructions below.

Plug-and-Play New Agents

One can easily test new agents by providing the name of the file containing the agent implementation. To implement a new agent, refer to agents/example.py (a rough sketch of the expected shape follows the example command below). To test a new agent and get evaluation results, run the following command (this example uses the no_map baseline):

python evaluate.py --agent_module no_map_walker --exp_config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --checkpoint_path model_checkpoints/ckpt.0.pth --no_fill
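
For orientation, a minimal sketch of what an agent module might look like; the authoritative interface is agents/example.py, and this sketch only assumes the standard habitat.Agent convention (reset() plus act(observations)), so the class name, module name, and action indices here are hypothetical:

import random

import habitat


class RandomWalker(habitat.Agent):
    def reset(self):
        # Called at the start of every episode; no state to clear here.
        pass

    def act(self, observations):
        # Pick a random action each step; the indices 0-3 are a guess at the
        # task's action space and must be checked against agents/example.py.
        return {"action": random.choice([0, 1, 2, 3])}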

In addition, one can change the number of episodes to be tested. However, this feature is only available in the annotated branch, as it requires a slight modification to the core habitat repository. Run the following command to change the number of episodes. It will not produce any bugs on the main branch either, but the argument will have no effect there.

python evaluate.py --agent_module no_map_walker --exp_config habitat_baselines/config/multinav/ppo_multinav_no_map.yaml --checkpoint_path model_checkpoints/ckpt.0.pth --no_fill --num_episodes 100

Plug-and-Play New Agents from Local Host

Running Agents

Suppose one has implementations of navigation agents that are not yet pushed to agents/. These can be tested on the fly using a handy script provided in scripts/. First, put all the agent implementations inside extern_agents/, mirroring the implementations in agents/. Then run the following command with the agent module you are trying to run; for example, if the new agent module is located in extern_agents/new_agent.py, run

./scripts/test_docker_agent.sh new_agent

Make sure the agents are located in the extern_agents/ folder. This way, there is no need to hassle with docker directly; docker is merely used as a black box for running evaluations.

Now suppose one needs to debug an agent in the docker environment. This can be done by running the following script; it opens bash with extern_agents/ mounted.

./scripts/test_docker_agent_bash.sh

To run evaluations inside the docker container, run the following command with the agent module name (in this case new_agent) provided.

./scripts/extern_eval.sh new_agent

Playing Agent Episodes with Video

Agent trajectories can be visualized per episode with the scripts in scripts/. Again, put all the agent implementations inside extern_agents/. Then run the following command with the agent module you are trying to run; for example, if the new agent module is located in extern_agents/new_agent.py, run

./scripts/test_docker_agent_video.sh new_agent 

Make sure the mount paths are set correctly inside ./scripts/test_docker_agent_video.sh.

To run evaluations inside the docker container, run the following command with the agent module name (in this case new_agent) and video save directory (in this case ./test_dir) provided.

./scripts/extern_eval_video.sh new_agent ./test_dir

Caveats

The original implementation assumes two GPUs, so bugs may occur if only a single GPU is present. In this case, do not run the docker scripts directly, as they will return errors. Instead, connect to a docker container with bash, first modify the baseline .yaml configuration so that it only uses a single GPU (a sketch of the edit follows), and then run the *_eval*.sh scripts. I am planning to remedy this issue in a similar plug-and-play fashion, but for the time being, stick to this procedure.
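
A hedged sketch of the single-GPU edit; TORCH_GPU_ID and SIMULATOR_GPU_ID are the key names habitat-baselines configs typically use, so verify them against the actual baseline .yaml:

TORCH_GPU_ID: 0       # run the policy on the single available GPU
SIMULATOR_GPU_ID: 0   # run the simulator on the same GPU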
