AutomaticUSnavigation

Investigating automatic navigation towards standard US views, integrating MARL with the virtual US environment developed in the CT2US simulation repo. We will start by investigating navigation in the XCAT phantom volumes, then integrate our CycleGAN model into the pipeline to perform navigation in the US domain. We also test navigation on clinical CT scans.

Example of agents navigating in a test XCAT phantom volume (not seen at train time)

The agents control 3 points in a 3D volume (one point each), which together sample the corresponding plane. We aim to train the agents to move towards 4-chamber views. We define such views as the plane passing through the centroids of the Left Ventricle, Right Ventricle and Right Atrium (XCAT volumes come with semantic segmentations). We reward the agents when they move towards this goal plane and when the number of pixels of tissues of interest present in the current plane increases (see rewards/rewards.py for more details). Furthermore, we add some good-behaviour-inducing rewards: we maximize the area of the triangle spanned by the agents, and we penalize the agents for moving outside of the volume's boundaries. The former encourages smooth transitions (if the agents are clustered close together we would get abrupt transitions), while the latter makes sure that the agents stay within the boundaries of the environment. Fig 1 below shows agents navigating towards a 4-Chamber view on a test XCAT volume; the agents are initialized randomly within the volume.
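
For concreteness, the reward terms described above could look roughly like the following sketch. This is only illustrative: the exact formulation lives in rewards/rewards.py, and the tissue label IDs and scaling used here are assumptions.

import numpy as np

def anatomy_reward(prev_seg, curr_seg, tissue_ids=(1, 2, 3)):
    # reward increases in the number of pixels of tissues of interest visible in
    # the sampled plane (e.g. LV/RV/RA labels; the IDs here are placeholders).
    prev = np.isin(prev_seg, tissue_ids).sum()
    curr = np.isin(curr_seg, tissue_ids).sum()
    return float(curr - prev) / curr_seg.size

def area_reward(p1, p2, p3):
    # area of the triangle spanned by the 3 agents (numpy arrays of positions):
    # keeping them spread out encourages smooth plane transitions.
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def boundary_penalty(point, volume_shape):
    # penalize an agent that steps outside the volume boundaries.
    outside = any(c < 0 or c >= s for c, s in zip(point, volume_shape))
    return -1.0 if outside else 0.0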

trained agent acting greedily.
Fig 1: Our best agent acting greedily for 250 steps after random initialization. Our full agent consists of 3 sub-agents, each controlling the movement of 1 point in 3D space. As each agent moves, the 3 points sample a particular view of the CT volume.
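
The plane-sampling step can be pictured as follows: take the 3 points the agents control, build an orthonormal in-plane basis around their centroid, and interpolate the volume on a regular grid. This is a minimal sketch under those assumptions (grid size and interpolation choices included), not the repo's actual slicing code.

import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, p1, p2, p3, size=256):
    # volume: 3D numpy array; p1, p2, p3: the agents' (z, y, x) positions.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    centre = (p1 + p2 + p3) / 3.0
    u = (p2 - p1) / np.linalg.norm(p2 - p1)            # first in-plane direction
    n = np.cross(p2 - p1, p3 - p1)
    n /= np.linalg.norm(n)                             # plane normal
    v = np.cross(n, u)                                 # second in-plane direction
    s = np.linspace(-size / 2, size / 2, size)
    coords = (centre[:, None, None]
              + u[:, None, None] * s[None, :, None]
              + v[:, None, None] * s[None, None, :])   # (3, size, size) sampling grid
    return map_coordinates(volume, coords, order=1, mode="constant")  # linear interpolation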

Example of agents navigating in clinical CTs

We then upgrade our pipeline by generating realistic fake CT volumes, applying Neural Style Transfer to our XCAT volumes. The generated volumes aim to resemble CT texture while retaining XCAT content. We train the agents in the same manner on this new simulated environment and test practicality both on unseen fake CT volumes and on clinical volumes from the LIDC-IDRI dataset.
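
The core of this style-transfer step can be sketched as a standard Gatys-style optimization applied slice by slice; the layer indices, weights and function names below are illustrative assumptions and not the repo's make_XCAT_volumes_realistic.py implementation.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = (0, 5, 10, 19, 28)   # conv layers used for style statistics (assumption)
CONTENT_LAYER = 21                  # conv layer used for content preservation (assumption)

def extract(x):
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylise_slice(xcat_slice, style_img, steps=200, style_weight=1e5):
    # xcat_slice, style_img: (1, 3, H, W) tensors in [0, 1] (grayscale slices
    # repeated over 3 channels so they fit the VGG input).
    style_grams = [gram(f) for f in extract(style_img.to(device))[0]]
    content_ref = extract(xcat_slice.to(device))[1]
    target = xcat_slice.clone().to(device).requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        style_feats, content_feat = extract(target)
        loss = F.mse_loss(content_feat, content_ref)
        for f, g in zip(style_feats, style_grams):
            loss = loss + style_weight * F.mse_loss(gram(f), g)
        loss.backward()
        opt.step()
    return target.detach().clamp(0, 1)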

trained agent acting greedily on fake CT. trained agent acting greedily on real CT.
Fig 2: Left) Our best agent acting greedily on a test fake CT volume for 125 steps after random initialization. Right) The same agents tested on clinical CT data.

Example of agents navigating in synthetic US

We couple our navigation framework with a CycleGAN that transforms XCAT slices into US images on the fly. Our CycleGAN model is not perfect yet, so we constrain the agents to within +/- 20 pixels of the goal plane. Note that we invert the intensities of the XCAT images to facilitate the translation process.
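
The on-the-fly translation amounts to something like the following wrapper around the plane sampled at each step. The preprocessing details and the generator interface here are assumptions; the actual generator comes from the CT2US simulation repo.

import torch

def xcat_slice_to_us(xcat_slice, generator, device="cpu"):
    # xcat_slice: (H, W) array sampled from the XCAT volume; generator: a
    # pretrained XCAT->US CycleGAN generator (from the CT2US simulation repo).
    x = torch.as_tensor(xcat_slice, dtype=torch.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)   # rescale to [0, 1]
    x = 1.0 - x                                      # invert intensities, as described above
    x = x * 2 - 1                                    # CycleGAN generators typically expect [-1, 1]
    x = x[None, None].to(device)                     # add batch and channel dims -> (1, 1, H, W)
    with torch.no_grad():
        fake_us = generator(x)
    return fake_us.squeeze().cpu().numpy()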

trained agent acting greedily on US environment.
Fig 3: Our best agent acting greedily for 50 steps after initialization within +/- 20 pixels of the goal plane. The XCAT volume is used as a proxy for navigation in the US domain.

Usage

  1. clone the repo and install dependencies
git clone [email protected]:CesareMagnetti/AutomaticUSnavigation.git
cd AutomaticUSnavigation
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
  Note: if you don't want to integrate the scripts with Weights & Biases, run them with the additional --wandb disabled flag.

  2. train our best agents on 15 XCAT volumes (you must generate these yourself). This will save results to ./results/ and checkpoints to ./checkpoints/. Then test the agents 100 times on each of the available volumes (in our case 20, hence --n_runs 2000) and generate some test trajectories to visualize the results.

python train.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --recurrent --batch_size 8 --update_every 15

python test.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14,samp15,samp16,samp17,samp18,samp19 --n_runs 2000 --load latest --fname quantitative_metrics

python test_trajectory.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp15,samp16,samp17,samp18,samp19 --n_steps 250 --load latest

  3. train our best agent on the fake CT volumes (we can then test on real CT data).
python make_XCAT_volumes_realistic.py --dataroot [path/to/XCAT/volumes] --saveroot [path/to/save/fakeCT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14,samp15,samp16,samp17,samp18,samp19 --style_imgs [path/to/style/realCT/images] --window 3

python train.py --name 15Volume_CT_both_terminateOscillate_Recurrent_smoothedVolumes_lessSteps --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --recurrent --batch_size 8 --update_every 15 --dataroot [path/to/fakeCT/volumes] --load_size 128 --no_preprocess --n_steps_per_episode 125 --buffer_size 25000 --randomize_intensities

python test_trajectory.py --name 15Volume_CT_both_terminateOscillate_Recurrent_smoothedVolumes_lessSteps --dataroot [path/to/realCT/volumes] --volume_ids 128_LIDC-IDRI-0101,128_LIDC-IDRI-0102 --load latest --n_steps 125 --no_preprocess --realCT

  4. train our best agent on the fake US environment
python train.py --name 15Volumes_easyObjective20_CT2USbestModel_bestRL --easy_objective --n_steps_per_episode 50 --buffer_size 10000 --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --dataroot [path/to/XCAT/volumes(must rotate)] --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --batch_size 8 --update_every 12 --recurrent --CT2US --ct2us_model_name bestCT2US

python test_trajectory.py --name 15Volumes_easyObjective20_CT2USbestModel_bestRL --dataroot [path/to/XCAT/volumes(must rotate)] --volume_ids samp15,samp16,samp17,samp18,samp19 --easy_objective --n_steps 50 --CT2US --ct2us_model_name bestCT2US --load latest

Acknowledgements

Work done with the help of Hadrien Reynaud. Our CT2US models are built upon the CT2US simulation repo, which itself is heavily based on CycleGAN-and-pix2pix and CUT repos.
