VR-Caps: A Virtual Environment for Active Capsule Endoscopy

Overview

We introduce a virtual active capsule endoscopy environment developed in Unity that provides a simulation platform to generate synthetic data as well as a test bed to develop and test algorithms. Using this environment, we perform evaluations for common robotics and computer vision tasks of active capsule endoscopy, such as classification, pose and depth estimation, area coverage, autonomous navigation, learning-based magnetic control of the endoscopic capsule robot inside GI-tract organs, and super-resolution. A demonstration of our virtual environment is available on YouTube.

Our main contributions are as follows:

  • We propose a synthetic data generation tool for creating fully labeled data.
  • Using our simulation environment, we provide a platform for testing numerous highly realistic scenarios.

See the Summary of our work and our Paper for details.

Getting Started

1. Installation

VR-Caps contains several components:

  • Unity
  • ML-Agents
  • SOFA
  • MagnetoDynamics
  • SC-SfMLearner

Consequently, to install and use VR-Caps, you will need to:

Clone the VR-Caps Repository

Now that you have installed Unity and Python, you can clone this repository.

git clone https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy.git

Now, open Unity Hub and create a new Unity project by adding VR-Caps-Unity. Then open the project you just added by clicking on it. Please note that we have tested the environment on Unity version 2019.3.3f1.

The opening scene, Clinic Setup, is our default scene. You can navigate to other scenes from the Scenes folder.

2. Creating Synthetic Data

To use the data creation tool, please open the Record Collect scene from Scenes.

This opens a scene in which one of our GI system models is already placed, with a capsule carrying a mono camera and a light source.

You will need Unity Recorder, which can be installed using the Unity Package Manager.

After installing Unity Recorder, navigate to Recorder Window and open the Recorder panel.

On the panel, click Add New Recorders, then select Image Sequence and AOV Image Sequence for RGB image recording and depth recording, respectively.

Adjust the image resolution from Capture Output Resolution and the FPS (frames per second) from Target Value.
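
If you prefer to drive the Recorder from code (e.g., for batch data generation), the Editor script below sketches the same configuration via the Unity Recorder scripting API. The menu path, output pattern, resolution, and frame rate are example values, and the exact API can differ between Recorder package versions.

    // Editor-only sketch: configure an Image Sequence recorder from code.
    // Place under an Editor/ folder. Values below are examples only.
    using UnityEditor;
    using UnityEditor.Recorder;
    using UnityEditor.Recorder.Input;
    using UnityEngine;

    public static class SyntheticDataRecorder
    {
        [MenuItem("VR-Caps/Start RGB Recording")] // hypothetical menu entry
        public static void StartRecording()
        {
            var controllerSettings =
                ScriptableObject.CreateInstance<RecorderControllerSettings>();

            var image = ScriptableObject.CreateInstance<ImageRecorderSettings>();
            image.name = "RGB Image Sequence";
            image.Enabled = true;
            image.OutputFormat = ImageRecorderSettings.ImageRecorderOutputFormat.PNG;
            image.imageInputSettings = new GameViewInputSettings
            {
                OutputWidth = 320,   // example resolution; match your camera setup
                OutputHeight = 320
            };
            image.OutputFile = "Recordings/rgb_<Frame>"; // example output pattern

            controllerSettings.AddRecorderSettings(image);
            controllerSettings.SetRecordModeToManual();
            controllerSettings.FrameRate = 30f; // FPS (Target Value in the panel)

            var controller = new RecorderController(controllerSettings);
            controller.PrepareRecording();
            controller.StartRecording();
        }
    }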

Importing new models to the scene

You can import other models from the GI-Organs folder by simply dragging a model into the scene. You will notice that the imported model has no texture.

To add texture, navigate to the Organs folder and drag the material files (.mat) onto the corresponding 3D organs (for example, Colon Material.mat onto Colon, which can be selected in the Hierarchy window under the Prefab).

Generating 3D organs from scratch

One can also generate 3D organs for different patients using the publicly available Cancer Imaging Archive. Please select CT data in DICOM format from the colon or stomach datasets. Please note that the DICOM images come in two sets, one taken in the supine position and the other in the prone position. We used the supine-position DICOM images, since that is the patient's position during a capsule endoscopy session.

After downloading the DICOM data, use InVesalius or similar software to convert the DICOM images to 3D objects. The software provides automatic selection of the regions to be converted, which in our case is Soft Tissue. A surface is then created over the selected regions, constructing the corresponding 3D model, which is exported as a Wavefront (.obj) file.

The 3D model is then imported into Blender for further processing, which includes removal of bones, fat, skin, and other artifacts, so that only the geometries of the colon, small intestines, and stomach remain. Please note that not all converted 3D models include the whole colon and intestines; such models should be discarded.

As some models consist of a large number of mesh faces, which makes them hard to process, we reduce the face count with another software tool, MeshLab, using its Quadric Edge Collapse Decimation algorithm for mesh simplification. It reduces the face count of a mesh while preserving its boundaries and normals.

Please note that, due to imperfections in the CT data, you may need to fill gaps and fix the topology of the organs. We used Blender for this operation. Please make sure that there are no missing parts in the 3D organs and that the connections and openings between the stomach and small intestines, and between the small intestines and colon, are all in place.

Generating Disease Classes

We create a pipeline to mimic 3 classes of diseases in our environment (polyps with various shapes and sizes, and ulcerative colitis and hemorrhage at 3 and 4 different severity levels, respectively) that can be used to train/test disease classification algorithms.

Polyps

In the Cancer Imaging Archive, you can also find models of organs with cancerous lumps, which can be used to mimic realistically shaped polyps at realistic locations of occurrence. First, navigate to the relevant class in the archive and download the corresponding DICOM data. Then, by following the same steps explained above, you can create a 3D organ with polyps. To apply the texture generated specifically for polyps, use Blender or similar software to manually separate the meshes in the regions where polyps occur and save them as separate models. Then, in Unity, you can apply the polyp texture Polyps.mat, located where the other organ textures are.

Ulcerative Colitis and Hemorrhage

Unlike polyps, ulcer and hemorrhage do not differ in the topology of the 3D organs but in texture. Therefore, we generate specific textures for these classes. To create organs with these diseases, please select and apply a texture from the textures folder where the other .mat files are located.

Various camera designs

As there are several commercially available capsule camera designs in wireless capsule endoscopy, we extend the standard mono-camera capsule in our environment to other designs such as stereo, dual, and 360° cameras. You can select these options from the Capsules folder.

Adjusting camera parameters and post processing effects

Adjusting camera parameters is useful both for mimicking real endoscopy cameras and for augmenting the data.

You can use the camera intrinsic parameters that we obtained by calibrating MiroCam and PillCam capsule endoscope cameras, or vary them to generate augmented data.

To adjust the Unity Camera, use the parameters in the Inspector window (e.g., Field of View, Sensor Size, Focal Length). Set the focal length to the average of f_x and f_y, and the sensor size X and Y to twice the optical center coordinates (c_x and c_y).
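
As a concrete sketch of that mapping, the snippet below sets Unity's physical camera properties from a set of calibrated pinhole intrinsics. The numeric values are placeholders, not our calibration results; substitute the values from your own calibration.

    using UnityEngine;

    // Sketch: map calibrated pinhole intrinsics (in pixels) onto Unity's
    // physical camera model. The intrinsic values below are placeholders.
    public class IntrinsicsToUnityCamera : MonoBehaviour
    {
        public float fx = 156.0f, fy = 155.0f;  // focal lengths (pixels)
        public float cx = 160.0f, cy = 160.0f;  // optical center (pixels)

        void Start()
        {
            var cam = GetComponent<Camera>();
            cam.usePhysicalProperties = true;

            // Treat one pixel as one sensor "millimeter" so the ratios hold:
            // focal length = average of fx and fy; sensor size = 2x optical center.
            cam.focalLength = 0.5f * (fx + fy);
            cam.sensorSize = new Vector2(2f * cx, 2f * cy);
        }
    }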

Please note that the image resolution is adjusted in the Recorder panel.

Specular reflection, which occurs on the surface of organs due to interaction with the light source, can also be adjusted via the Coat Mask parameter in Unity's Inspector window.

Post-processing effects provided by HDRP (High Definition Render Pipeline), such as specular reflection, vignette, lens distortion, chromatic aberration, and depth of field, can also be adjusted with the relevant parameters.

Movement of the capsule

For the actuation of the capsule, we place a cylinder magnet inside the capsule and a ball magnet attached to the robot arm. The magnetic field is modeled as dipole-dipole interactions using MagnetoDynamics.
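
For intuition, the sketch below computes the force that such a dipole-dipole model exerts on one magnetic moment due to another, using the standard closed-form expression. It is illustrative physics only, not MagnetoDynamics' actual source code.

    using UnityEngine;

    // Force on moment m2 (A*m^2) due to m1, separated by the vector r
    // (meters, pointing from m1 to m2), in the point-dipole approximation.
    public static class DipoleDipole
    {
        const float Mu0 = 4e-7f * Mathf.PI; // vacuum permeability (T*m/A)

        public static Vector3 ForceOnM2(Vector3 m1, Vector3 m2, Vector3 r)
        {
            float d = r.magnitude;
            Vector3 n = r / d; // unit vector from m1 to m2
            float k = 3f * Mu0 / (4f * Mathf.PI * d * d * d * d);
            return k * (Vector3.Cross(Vector3.Cross(n, m1), m2)
                      + Vector3.Cross(Vector3.Cross(n, m2), m1)
                      - 2f * Vector3.Dot(m1, m2) * n
                      + 5f * Vector3.Dot(Vector3.Cross(n, m1),
                                         Vector3.Cross(n, m2)) * n);
        }
    }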

The default Scene has two infinitesimal dipoles (MagneticDipole prefabs) embedded in the Rigidbodies of the DiscMagnet (a child object of Capsule) and BallMagnet objects. In Unity's Scene and Hierarchy views, you can see the MagneticDipoles attached to them. Please note that every Scene that uses MagnetoDynamics must contain an ElectromagneticFieldController, which can be found inside the Magnetodynamics folder; just drag it anywhere in the scene to activate the magnetic field.

If the InverseKinematic.cs script is activated, the robot arm will also move as you move the ball magnet (either via a script or manually).

It is also possible to move the capsule directly, without any electromagnetic force acting on it. To do that, add the CapsuleMovement.cs script to the capsule and control it with the keyboard arrows.
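
A minimal sketch of what such a keyboard-control script can look like (the repository's actual CapsuleMovement.cs may differ):

    using UnityEngine;

    // Minimal keyboard-driven capsule mover; attach to the Capsule object.
    // Illustrative only; not the repository's CapsuleMovement.cs.
    public class CapsuleKeyboardMover : MonoBehaviour
    {
        public float speed = 0.05f; // movement speed; tune to taste

        void Update()
        {
            // Arrow keys map to the Horizontal/Vertical axes by default.
            float dx = Input.GetAxis("Horizontal");
            float dz = Input.GetAxis("Vertical");
            transform.Translate(new Vector3(dx, 0f, dz) * speed * Time.deltaTime);
        }
    }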

The capsule camera can also be controlled by adding the MouseCameraController.cs script to the camera; the capsule camera will then look in the direction pointed by the mouse.
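
Likewise, a minimal mouse-look sketch (the repository's actual MouseCameraController.cs may differ):

    using UnityEngine;

    // Minimal mouse-look controller; attach to the capsule camera.
    // Illustrative only; not the repository's MouseCameraController.cs.
    public class MouseLookSketch : MonoBehaviour
    {
        public float sensitivity = 2f;
        float yaw, pitch;

        void Update()
        {
            yaw += sensitivity * Input.GetAxis("Mouse X");
            pitch -= sensitivity * Input.GetAxis("Mouse Y");
            pitch = Mathf.Clamp(pitch, -89f, 89f); // avoid flipping over
            transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
        }
    }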

3. Tasks

3.1. Area Coverage

We use Unity's ML-Agents Toolkit to train a Deep Reinforcement Learning (DRL) based active control method whose goal is to learn a maximum-coverage policy for human organ monitoring within minimal operation time. We created a separate project for the area coverage task (VR-Caps-Unity-RL). To reproduce the results or train your own control policy, please follow the instructions provided here.
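
For orientation, a coverage agent in ML-Agents has roughly the shape sketched below. This is an illustrative sketch only: the class name, observation, and reward terms are ours, NewlyCoveredArea is a hypothetical placeholder, and the Unity.MLAgents API shown is from a recent ML-Agents release, which may differ from the version used in VR-Caps-Unity-RL.

    using Unity.MLAgents;
    using Unity.MLAgents.Actuators;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    // Illustrative coverage agent, not the project's actual agent code.
    public class CoverageAgentSketch : Agent
    {
        public Transform ballMagnet;   // external magnet the policy controls
        public float moveScale = 0.01f;

        public override void CollectObservations(VectorSensor sensor)
        {
            sensor.AddObservation(ballMagnet.position); // example observation
        }

        public override void OnActionReceived(ActionBuffers actions)
        {
            // Three continuous actions displace the external ball magnet.
            var a = actions.ContinuousActions;
            ballMagnet.position += moveScale * new Vector3(a[0], a[1], a[2]);

            // Reward newly covered mucosa area; small penalty per step
            // encourages short operation time.
            AddReward(NewlyCoveredArea() - 0.001f);
        }

        float NewlyCoveredArea()
        {
            return 0f; // placeholder: coverage bookkeeping goes here
        }
    }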

3.2. Pose and Depth Estimation

To illustrate the effectiveness of the VR-Caps environment for neural network training for pose and depth estimation, we trained a state-of-the-art method, the SC-SfMLearner algorithm, on synthetic data created in VR-Caps. The results shown in the paper can be reproduced using the models provided in the drive. The Virtual Pre Training folder contains the model trained only on synthetic data. Model 1 corresponds to the case where only real data is used (no virtual pre-training), and Model 2 to the case where we pre-train on synthetic data and then fine-tune on real data from the EndoSLAM dataset. For pre-training, we used the data on the drive. The test sets are Colon_Traj5_HighCam and Colon_Traj5_LowCam for the colon, and SmallInstesine_Traj1_HighCam and SmallInstesine_Traj4_HighCam for the small intestine.

For pose estimation, ATE and RPE can be calculated using this script. To extend the test cases, you can generate new data as explained above, train new SC-SfM networks, and test on real or synthetic data.
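
For reference, ATE and RPE are conventionally defined as follows (in the TUM RGB-D benchmark style, which we assume the script follows), with $P_i$ the estimated poses, $Q_i$ the ground-truth poses, $S$ the rigid-body alignment, and $\Delta$ the frame offset:

    % Conventional ATE/RPE definitions (TUM RGB-D benchmark style).
    \mathrm{ATE}_{\mathrm{rmse}}
      = \Bigl(\tfrac{1}{N}\sum_{i=1}^{N}
        \bigl\|\operatorname{trans}\bigl(Q_i^{-1} S P_i\bigr)\bigr\|^{2}\Bigr)^{1/2},
    \qquad
    E_i = \bigl(Q_i^{-1} Q_{i+\Delta}\bigr)^{-1}\bigl(P_i^{-1} P_{i+\Delta}\bigr),
    \quad
    \mathrm{RPE}_{\mathrm{rmse}}
      = \Bigl(\tfrac{1}{m}\sum_{i=1}^{m}
        \bigl\|\operatorname{trans}(E_i)\bigr\|^{2}\Bigr)^{1/2}.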

For depth estimation, we test on both virtual and real endoscopy data (Kvasir and Redlesion datasets).

3.3. 3D Reconstruction

In this work, we propose and evaluate a hybrid 3D reconstruction technique. To exemplify the effectiveness of Unity data, we compare reconstruction results on both real and synthetic data.

3.4. Disease Classification

We mimic the 3 diseases (i.e., polyps, hemorrhage, and ulcerative colitis) in our simulation environment. Hemorrhage and ulcerative colitis are created based on real endoscopy images from the Kvasir dataset, mimicking the abnormal mucosa texture. As polyps are distinctive not only in texture but also in topology, we use CT scans from patients who have polyps and use this 3D morphological information to reconstruct 3D organs inside our environment. The environment contains hemorrhage instances with severities ranging from grade 1 to grade 4, three different grades of ulcerative colitis, and polyp instances with various shapes and sizes.

3.5. Super Resolution

We benchmark the effectiveness of the Unity environment using the Deep Super-Resolution for Capsule Endoscopy (EndoL2H) network, motivated by the dilemma that higher camera resolution comes at the cost of larger optics and a larger sensor array.

Results

Visual demonstrations of all the tasks performed in this work and their results are shown below. For more details, please see the article.

Frequently Asked Questions

Limitations

Reference

If you find our work useful in your research, or if you use parts of this code, please consider citing our paper:

@misc{incetan2020vrcaps,
      title={VR-Caps: A Virtual Environment for Capsule Endoscopy}, 
      author={Kagan Incetan and Ibrahim Omer Celik and Abdulhamid Obeid and Guliz Irem Gokceler and Kutsev Bengisu Ozyoruk and Yasin Almalioglu and Richard J. Chen and Faisal Mahmood and Hunter Gilbert and Nicholas J. Durr and Mehmet Turan},
      year={2020},
      eprint={2008.12949},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}