Fast Neural Representations for Direct Volume Rendering


Sebastian Weiss, Philipp Hermüller, Rüdiger Westermann

This repository contains the code and settings to reproduce all figures (and more) from the paper. https://arxiv.org/abs/2112.01579

Jump to

How to train a new network

How to reproduce the figures

Video

Watch the video

Requirements

  • NVIDIA GPU with RTX (tensor cores), e.g. RTX 20xx or RTX 30xx (we use an RTX 2070)
  • CUDA 11
  • OpenGL with GLFW and GLM
  • Python 3.8 or higher, see applications/env.txt for the required packages

Tested systems:

  • Windows 10, Visual Studio 2019, CUDA 11.1, Python 3.9, PyTorch 1.9
  • Ubuntu 20.04, gcc 9.3.0, CUDA 11.1, Python 3.8, PyTorch 1.8

Installation / Project structure

The project consists of a C++/CUDA part that has to be compiled first:

  • renderer: the renderer static library, see below for noteworthy files. Files ending in .cuh and .cu are CUDA kernel files.
  • bindings: entry point to the Python bindings; after compilation, this produces the Python extension module pyrenderer, placed in bin
  • gui: the interactive GUI to design the config files and explore the reference datasets and the trained networks. Requires OpenGL.

For compilation, we recommend CMake. For running on a headless server, specify -DRENDERER_BUILD_OPENGL_SUPPORT=Off -DRENDERER_BUILD_GUI=Off. Alternatively, compile-library-server.sh is provided for compilation with the built-in extension compiler of PyTorch. We use this for compilation on our headless GPU server, as it avoids potential dependency mismatches between different CUDA, Python, or PyTorch versions across virtualenvs or conda environments.
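For reference, a typical headless configure-and-build might look like the following (the out-of-source build directory is an assumption; adjust the generator and paths for your system):

mkdir build && cd build
cmake .. -DRENDERER_BUILD_OPENGL_SUPPORT=Off -DRENDERER_BUILD_GUI=Off
cmake --build . --config Release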

After compiling the C++ library, network training and evaluation are performed in Python. The Python files are all found in applications:

  • applications/volumes: the volumes used in the ablation studies
  • applications/config-files: the config files
  • applications/common: common utilities, especially utils.py for loading the pyrenderer library and other helpers
  • applications/losses: the loss functions, including SSIM and LPIPS
  • applications/volnet: the main network code for training and inference, see below

Noteworthy Files

Here we list and explain noteworthy files that contain important aspects of the presented method.

On the side of the C++/CUDA library, the following files in renderer/ are important. Note that multiple implementations exist for the various modules, e.g. for the transfer function (TF). Therefore, the CUDA kernels are assembled on demand using NVRTC runtime compilation.

  • Image evaluators (iimage_evaluator.h), the entry point to the renderer. Only one implementation:

    • image_evaluator_simple.h, renderer_image_evaluator_simple.cuh: Contains the loop over the pixels and generates the rays -- possibly multisampled for Monte Carlo -- from the camera
  • Ray evaluators (iray_evaluation.h), called per ray; they return the color of the ray and call the volume implementation to fetch the density

    • ray_evaluation_stepping.h, renderer_ray_evaluation_stepping_iso.cuh, renderer_ray_evaluation_stepping_dvr.cuh: constant stepping for isosurfaces and DVR (sketched in Python after this list)
    • ray_evaluation_monte_carlo.h: Monte Carlo path tracing with multiple bounces, delta tracking and various phase functions
  • Volume interpolations (volume_interpolation.h). On the CUDA side, implementations provide a functor that evaluates a position and returns the density or color at that point

    • Grid interpolation (volume_interpolation_grid.h): trilinear interpolation into a voxel grid stored in volume.h (also sketched below)
    • Scene Reconstruction Networks (volume_interpolation_network.h): the SRNs as presented in the paper. See the header for the binary format of the .volnet file. The proposed tensor core implementation (Sec. 4.1) can be found in renderer_volume_tensorcores.cuh
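To make these roles concrete, here is a minimal Python/PyTorch sketch of the two pieces referenced above: a trilinear grid lookup standing in for volume_interpolation_grid.h, feeding a constant-stepping, front-to-back DVR loop standing in for ray_evaluation_stepping.h. It only illustrates the algorithm, not the actual CUDA kernels; the transfer function tf is a stand-in callable mapping a density to an (rgb, opacity) pair.

import torch

def trilinear_sample(grid, pos):
    # Trilinear interpolation into a dense voxel grid.
    # grid: (X, Y, Z) tensor of densities; pos: (3,) position in [0,1]^3.
    size = torch.tensor(grid.shape)
    coords = pos * (size - 1).to(pos.dtype)
    lo = coords.floor().long()
    hi = torch.minimum(lo + 1, size - 1)
    f = coords - lo                      # fractional offsets per axis
    d = 0.0
    for cx, wx in ((lo[0], 1 - f[0]), (hi[0], f[0])):
        for cy, wy in ((lo[1], 1 - f[1]), (hi[1], f[1])):
            for cz, wz in ((lo[2], 1 - f[2]), (hi[2], f[2])):
                d = d + wx * wy * wz * grid[cx, cy, cz]
    return d

def march_ray(grid, tf, origin, direction, t_near, t_far, step):
    # Constant-stepping DVR with front-to-back emission-absorption compositing.
    color = torch.zeros(3)
    alpha = torch.tensor(0.0)
    t = t_near
    while t < t_far and alpha < 0.999:   # early-ray termination
        density = trilinear_sample(grid, origin + t * direction)
        rgb, a = tf(density)             # transfer function: density -> color + opacity
        a = 1 - (1 - a) ** step          # opacity correction for the step size
        color = color + (1 - alpha) * a * rgb
        alpha = alpha + (1 - alpha) * a
        t = t + step
    return color, alpha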

On the python side in applications/volnet/, the following files are important:

  • train_volnet.py: the entry point for training
  • inference.py: the entry point for inference, used in the scripts for evaluation. Also converts trained models into the binary format for the GUI
  • network.py: the SRN network specification (an illustrative sketch follows this list)
  • input_data.py: the loader of the input grids, possibly time-dependent
  • training_data.py: world- and screen-space data loaders; contains routines for importance sampling / adaptive resampling. The rejection sampling is implemented in CUDA for performance and called from here
  • raytracing.py: differentiable raytracing in PyTorch, including the memory optimization from Weiss & Westermann 2021 (DiffDVR)
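As a rough illustration of what network.py specifies, the following PyTorch sketch combines NeRF-style Fourier features with a small fully-connected network mapping a 3D position to a density. It is not the actual implementation: the latent-grid features of Sec. 5.2 are omitted, and SiLU stands in for the SnakeAlt activation used in the training call below.

import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    # NeRF-style construction with power-of-two frequencies (cf. --fourierstd -1).
    def __init__(self, num_frequencies=14):
        super().__init__()
        self.register_buffer('freqs', 2.0 ** torch.arange(num_frequencies))

    def forward(self, x):                 # x: (N, 3) positions in [0,1]^3
        proj = (x[..., None] * self.freqs).flatten(-2)   # (N, 3 * F)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class SRN(nn.Module):
    # Fourier encoding followed by three hidden layers (cf. --layers 32:32:32)
    # and a direct density output (cf. --outputmode density:direct).
    def __init__(self, hidden=32, num_frequencies=14):
        super().__init__()
        self.encoding = FourierFeatures(num_frequencies)
        self.net = nn.Sequential(
            nn.Linear(6 * num_frequencies, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, positions):         # positions: (N, 3) -> densities: (N, 1)
        return self.net(self.encoding(positions))

# usage: densities = SRN()(torch.rand(1024, 3))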

How to train

The training is launched via applications/volnet/train_volnet.py. Have a look at python train_volnet.py --help for the available command line parameters.

A typical invocation looks like this (this is how fV-SRN with Ejecta from Fig. 1 was trained):

python train_volnet.py
   config-files/ejecta70-v6-dvr.json
   --train:mode world  # instead of 'screen', Sec. 5.4
   --train:samples 256**3
   --train:sampler_importance 0.01   # importance sampling based on the density, optional, see Section 5.3
   --train:batchsize 64*64*128
   --rebuild_dataset 51   # adaptive resampling after 51 epochs, see Section 5.3
   --val:copy_and_split  # for validation, use 20% of training samples
   --outputmode density:direct  # instead of e.g. 'color', Sec. 5.3
   --lossmode density
   --layers 32:32:32  # hidden layer sizes; the number of linear layers / weight matrices is one more than the number of entries
   --activation SnakeAlt:2
   --fouriercount 14
   --fourierstd -1  # -1 indicates NeRF-construction, positive value indicate sigma for random Fourier Features, see Sec. 5.5
   --volumetric_features_resolution 32  # the grid specification, see Sec. 5.2
   --volumetric_features_channels 16
   -l1 1  # use L1 loss with weight 1
   --lr 0.01
   --lr_step 100  # reduce the learning rate after 100 epochs (the default reduction factor is used)
   -i 200  # number of epochs
   --save_frequency 20  # save checkpoints + test visualizations every 20 epochs

After training, the resulting .hdf5 file contains the network weights + latent grid and can be compiled to our binary format via inference.py. The resulting .volnet file can then be loaded in the GUI.
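For intuition about the density-based importance sampling used during training (--train:sampler_importance above), here is a hedged NumPy sketch of the rejection sampling of world-space training positions. The real implementation runs in CUDA and is called from training_data.py; sample_density is a hypothetical helper that evaluates the reference grid at normalized positions, and interpreting the 0.01 parameter as a minimum acceptance probability is an assumption.

import numpy as np

def rejection_sample(sample_density, num_samples, min_prob=0.01, rng=None):
    # Draw uniform candidates in [0,1]^3 and keep each with probability
    # proportional to its density (clamped from below by min_prob), so that
    # dense regions of the volume receive more training samples.
    rng = rng or np.random.default_rng()
    accepted, total = [], 0
    while total < num_samples:
        pos = rng.random((num_samples, 3))
        density = sample_density(pos)       # assumed to return values in [0, 1]
        keep = rng.random(num_samples) < np.maximum(density, min_prob)
        accepted.append(pos[keep])
        total += int(keep.sum())
    return np.concatenate(accepted)[:num_samples]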

How to reproduce the figures

Each figure is associated with a respective script in applications/volnet. Those scripts include the training of the networks, evaluation, and plot generation. They have to be launched with the working directory set to applications/. Note that some of those scripts take multiple hours due to the network training.

  • Figure 1, teaser: applications/volnet/eval_CompressionTeaser.py
  • Table 1, possible architectures: applications/volnet/collect_possible_layers.py
  • Section 4.2, performance change due to grid compression: applications/volnet/eval_VolumetricFeatures_GridEncoding
  • Figure 3, performance of the networks: applications/volnet/eval_NetworkConfigsGrid.py
  • Section 5, study on the activation functions: applications/volnet/eval_ActivationFunctions.py
  • Figure 4+5, latent grid, also includes other datasets: applications/volnet/eval_VolumetricFeatures.py
  • Figure 6, density-vs-color:

    • applications/volnet/eval_world_DensityVsColorGrid_NoImportance.py: without initial importance sampling and adaptive resampling (Fig. 6)
    • applications/volnet/eval_world_DensityVsColorGrid.py: includes initial importance sampling (not shown)
    • applications/volnet/eval_world_DensityVsColorGrid_WithResampling.py: with initial importance sampling and adaptive resampling (improvement reported in Section 5.3)
  • Table 2, Figure 7, screen-vs-world: applications/volnet/eval_ScreenVsWorld_GridNeRF.py
  • Figure 8, Fourier features: applications/volnet/eval_Fourier_Grid.py , includes the datasets not shown in the paper for space reasons
  • Figures 9 and 10, time-dependent fields:

    • applications/volnet/eval_TimeVolumetricFeatures.py: train on every fifth timestep
    • applications/volnet/eval_TimeVolumetricFeatures2.py: train on every second timestep
    • applications/volnet/eval_TimeVolumetricFeatures_plotPaper.py: assembles the plot for Figure 9

The other eval_*.py scripts were cut from the paper due to space limitations. They mirror the tests above, except that no latent grid is used; instead, the largest possible networks that fit into the tensor core architecture are employed.
