Investigating automatic navigation towards standard US views by integrating MARL with the virtual US environment developed in CT2US simulation

Overview

AutomaticUSnavigation

We investigate automatic navigation towards standard US views by integrating multi-agent reinforcement learning (MARL) with the virtual US environment developed in our CT2US simulation. We start by investigating navigation in XCAT phantom volumes, then integrate our CycleGAN model into the pipeline to perform navigation in the US domain. We also test navigation on clinical CT scans.

example of agents navigating in a test XCAT phantom volume (not seen at train time)

The agents control 3 points in a 3D volume, which together sample the corresponding plane. We aim to train the agents to move towards 4-chamber views. We define such views as the plane passing through the centroids of the Left Ventricle, Right Ventricle and Right Atrium (XCAT volumes come with semantic segmentations). We reward the agents when they move towards this goal plane and when the number of pixels of tissues of interest in the current plane increases (see rewards/rewards.py for more details). Furthermore, we add some good-behaviour-inducing rewards: we maximize the area of the triangle spanned by the agents and we penalize the agents for moving outside the volume's boundaries. The former encourages smooth transitions (if the agents are clustered close together we would get abrupt transitions), while the latter ensures that the agents stay within the boundaries of the environment. The following animation shows agents navigating towards a 4-chamber view on a test XCAT volume; the agents are initialized randomly within the volume.

trained agent acting greedily.
Fig 1: Our best agent acting greedily for 250 steps after random initialization. Our full agent consists of 3 sub-agents, each controlling the movement of 1 point in 3D space. As the agents move, the 3 points sample a particular view of the CT volume.
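
To make the reward shaping concrete, below is a minimal sketch of how the terms described above could be combined. It is illustrative only: the function and argument names are hypothetical, and the actual implementation (including the exact weights and normalisations) lives in rewards/rewards.py.

import numpy as np

def combined_reward(points, prev_plane_dist, curr_plane_dist,
                    prev_anatomy_count, curr_anatomy_count, vol_shape,
                    w_plane=1.0, w_anatomy=1.0, w_area=0.1, w_oob=1.0):
    # Hypothetical sketch of the shaped reward described above; the real terms
    # live in rewards/rewards.py and may be weighted/normalised differently.
    # 1) incremental plane-distance reward: positive if the agents moved closer to the goal plane
    r_plane = np.sign(prev_plane_dist - curr_plane_dist)
    # 2) incremental anatomy reward: positive if more pixels of the tissues of interest are in view
    r_anatomy = np.sign(curr_anatomy_count - prev_anatomy_count)
    # 3) encourage a large triangle spanned by the 3 agents (smoother plane transitions)
    a, b, c = [np.asarray(p, dtype=float) for p in points]
    r_area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    # 4) penalise any agent stepping outside the volume boundaries
    oob = sum(np.any(p < 0) or np.any(p >= np.array(vol_shape)) for p in (a, b, c))
    return w_plane * r_plane + w_anatomy * r_anatomy + w_area * r_area - w_oob * oob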

example of agents navigating in clinical CTs

We then upgrade our pipeline by generating realistic fake CT volumes, applying Neural Style Transfer to our XCAT volumes. The generated volumes aim to resemble CT texture while retaining the XCAT content. We train the agents in the same manner on this new simulated environment and test practicality both on unseen fake CT volumes and on clinical volumes from the LIDC-IDRI dataset.

trained agent acting greedily on fake CT. trained agent acting greedily on real CT.
Fig 2: Left) Our best agent acting greedily on a test fake CT volume for 125 steps after random initialization. Right) The same agents tested on clinical CT data.
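
For context, the slice-wise style transfer can be thought of as Gatys-style optimisation of VGG content and Gram-matrix style losses. The sketch below is a simplified, single-layer illustration under that assumption; it is not the repository's make_XCAT_volumes_realistic.py implementation (which, for example, also smooths over neighbouring slices via --window).

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"
# Frozen VGG19 feature extractor up to relu4_2 (single layer used here for brevity).
features = vgg19(weights="IMAGENET1K_V1").features[:23].to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)

def gram(x):
    b, c, h, w = x.shape
    f = x.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=200, style_weight=1e4):
    """content/style: (1, 3, H, W) tensors in [0, 1]; a grayscale XCAT slice is repeated to 3 channels."""
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.01)
    with torch.no_grad():
        c_feat = features(content)        # content target from the XCAT slice
        s_gram = gram(features(style))    # style target from a real CT image
    for _ in range(steps):
        opt.zero_grad()
        f = features(img)
        loss = F.mse_loss(f, c_feat) + style_weight * F.mse_loss(gram(f), s_gram)
        loss.backward()
        opt.step()
    return img.detach().clamp(0, 1)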

example of agents navigating on synthetic US

We couple our navigation framework with a CycleGAN that transforms XCAT slices into US images on the fly. Our CycleGAN model is not perfect yet, so we constrain the agents to within +/- 20 pixels of the goal plane. Note that we invert the intensities of the XCAT images to facilitate the translation process.

trained agent acting greedily on US environment.
Fig 3: Our best agent acting greedily for 50 steps after initialization within +/- 20 pixels of the goal plane. The XCAT volume is used as a proxy for navigation in the US domain.
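
As a rough illustration of the on-the-fly translation described above, the sketch below inverts a sampled XCAT slice and passes it through a CycleGAN generator. The generator handle G_xcat2us and the exact preprocessing are assumptions for this example; the real pipeline uses the pretrained bestCT2US model from the CT2US simulation repo.

import torch

# Hypothetical sketch of the on-the-fly XCAT -> US translation described above.
# `G_xcat2us` stands in for a pretrained CycleGAN generator (e.g. the bestCT2US model);
# the actual loading and preprocessing in the repo may differ.
@torch.no_grad()
def xcat_slice_to_us(slice_2d, G_xcat2us, device="cpu"):
    """slice_2d: (H, W) float array/tensor with intensities in [0, 1]."""
    x = torch.as_tensor(slice_2d, dtype=torch.float32, device=device)
    x = 1.0 - x                      # invert intensities to ease the translation
    x = x * 2.0 - 1.0                # scale to [-1, 1] as expected by the generator
    x = x.unsqueeze(0).unsqueeze(0)  # add batch and channel dims -> (1, 1, H, W)
    fake_us = G_xcat2us(x)           # CycleGAN generator produces the fake US image
    return (fake_us.squeeze() + 1.0) / 2.0  # back to [0, 1] for the agents / display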

Usage

  1. clone the repo and install dependencies
git clone git@github.com:CesareMagnetti/AutomaticUSnavigation.git
cd AutomaticUSnavigation
python3 -m venv env
source env/bin/activate
pip install -r requirements
  Note: if you don't want to integrate the scripts with Weights & Biases, run them with the additional --wandb disabled flag.

  2. train our best agents on 15 XCAT volumes (you must generate these yourself). Training will save results to ./results/ and checkpoints to ./checkpoints/. Then test the agents 100 times on all available volumes (20 in our case) and generate some test trajectories to visualize the results.

python train.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --recurrent --batch_size 8 --update_every 15

python test.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14,samp15,samp16,samp17,samp18,samp19 --n_runs 2000 --load latest --fname quantitative_metrics

python test_trajectory.py --name 15Volume_both_terminateOscillate_Recurrent --dataroot [path/to/XCAT/volumes] --volume_ids samp15,samp16,samp17,samp18,samp19 --n_steps 250 --load latest
  3. train our best agent on the fake CT volumes (we can then test on real CT data).
python make_XCAT_volumes_realistic.py --dataroot [path/to/XCAT/volumes] --saveroot [path/to/save/fakeCT/volumes] --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14,samp15,samp16,samp17,samp18,samp19 --style_imgs [path/to/style/realCT/images] --window 3

python train.py --name 15Volume_CT_both_terminateOscillate_Recurrent_smoothedVolumes_lessSteps --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --recurrent --batch_size 8 --update_every 15 --dataroot [path/to/fakeCT/volumes] --load_size 128 --no_preprocess --n_steps_per_episode 125 --buffer_size 25000 --randomize_intensities

python test_trajectory.py --name 15Volume_CT_both_terminateOscillate_Recurrent_smoothedVolumes_lessSteps --dataroot [path/to/realCT/volumes] --volume_ids 128_LIDC-IDRI-0101,128_LIDC-IDRI-0102 --load latest --n_steps 125 --no_preprocess --realCT
  4. train our best agent on the fake US environment.
python train.py --name 15Volumes_easyObjective20_CT2USbestModel_bestRL --easy_objective --n_steps_per_episode 50 --buffer_size 10000 --volume_ids samp0,samp1,samp2,samp3,samp4,samp5,samp6,samp7,samp8,samp9,samp10,samp11,samp12,samp13,samp14 --dataroot [path/to/XCAT/volumes(must rotate)] --anatomyRewardWeight 1 --planeDistanceRewardWeight 1 --incrementalAnatomyReward --termination oscillate --exploring_steps 0 --batch_size 8 --update_every 12 --recurrent --CT2US --ct2us_model_name bestCT2US

python test_trajectory.py --name 15Volumes_easyObjective20_CT2USbestModel_bestRL --dataroot [path/to/XCAT/volumes(must rotate)] --volume_ids samp15,samp16,samp17,samp18,samp19 --easy_objective --n_steps 50 --CT2US --ct2us_model_name bestCT2US --load latest

Acknowledgements

Work done with the help of Hadrien Reynaud. Our CT2US models are built upon the CT2US simulation repo, which itself is heavily based on CycleGAN-and-pix2pix and CUT repos.
