PlenOctrees: NeRF-SH Training & Conversion

Overview

PlenOctrees Official Repo: NeRF-SH training and conversion

This repository contains code to train NeRF-SH and to extract the PlenOctree, constituting part of the code release for:

PlenOctrees for Real Time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

https://alexyu.net/plenoctrees

Please see the following repository for our C++ PlenOctrees volume renderer: https://github.com/sxyu/volrend

Setup

Please use conda for a replicable environment.

conda env create -f environment.yml
conda activate plenoctree
pip install --upgrade pip

Or you can install the dependencies manually by:

conda install pytorch torchvision cudatoolkit=11.0 -c pytorch
conda install tqdm
pip install -r requirements.txt
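
As a quick sanity check (a generic PyTorch idiom, not part of the original instructions), you can confirm that PyTorch sees your GPU:

python -c "import torch; print(torch.cuda.is_available())"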

[Optional] Install GPU and TPU support for Jax. This is useful for NeRF-SH training. Remember to change cuda110 to your CUDA version, e.g. cuda102 for CUDA 10.2.

pip install --upgrade jax jaxlib==0.1.65+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html
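
To verify that JAX picked up your accelerator (again a generic check, not repo-specific), you can list the devices JAX sees:

python -c "import jax; print(jax.devices())"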

NeRF-SH Training

We release our trained NeRF-SH models as well as converted plenoctrees on Google Drive. You can also use the following commands to reproduce the NeRF-SH models.

Training and evaluation on the NeRF-Synthetic dataset (Google Drive):

export DATA_ROOT=./data/NeRF/nerf_synthetic/
export CKPT_ROOT=./data/PlenOctree/checkpoints/syn_sh16/
export SCENE=chair
export CONFIG_FILE=nerf_sh/config/blender

python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

python -m nerf_sh.eval \
    --chunk 4096 \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

Note that for SCENE=mic, we adopt a warmup learning rate schedule (--lr_delay_steps 50000 --lr_delay_mult 0.01) to avoid unstable initialization.
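
For instance, the mic run would look like the following (a sketch: the same training command with the warmup flags quoted above appended):

export SCENE=mic

python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --lr_delay_steps 50000 \
    --lr_delay_mult 0.01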

Training and evaluation on the TanksAndTemple dataset (Download Link) from the NSVF paper:

export DATA_ROOT=./data/TanksAndTemple/
export CKPT_ROOT=./data/PlenOctree/checkpoints/tt_sh25/
export SCENE=Barn
export CONFIG_FILE=nerf_sh/config/tt

python -m nerf_sh.train \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

python -m nerf_sh.eval \
    --chunk 4096 \
    --train_dir $CKPT_ROOT/$SCENE/ \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

PlenOctrees Conversion and Optimization

Before converting the NeRF-SH models into plenoctrees, you should already have the NeRF-SH models trained/downloaded and placed at ./data/PlenOctree/checkpoints/{syn_sh16, tt_sh25}/. Also make sure you have the training data placed at ./data/{NeRF/nerf_synthetic, TanksAndTemple}.

To reproduce our results in the paper, you can simply run:

# NeRF-Synthetic dataset
python -m octree.task_manager octree/config/syn_sh16.json --gpus="0 1 2 3"

# TanksAndTemple dataset
python -m octree.task_manager octree/config/tt_sh25.json --gpus="0 1 2 3"

The above command parallelizes all scenes in the dataset across the GPUs you specify. The JSON files contain hyperparameters tuned for better quality (PSNR, SSIM, LPIPS), so in this setting a 24GB GPU is needed for each scene, and on average the process takes about 15 minutes per scene. The converted plenoctrees will be saved to ./data/PlenOctree/checkpoints/{syn_sh16, tt_sh25}/$SCENE/octrees/.
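
If fewer GPUs are available, you can pass a shorter list to --gpus; we assume the task manager then queues the scenes on the listed devices:

# Run all NeRF-Synthetic scenes on a single GPU
python -m octree.task_manager octree/config/syn_sh16.json --gpus="0"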

Below is a more straightforward script for demonstration purposes:

export DATA_ROOT=./data/NeRF/nerf_synthetic/
export CKPT_ROOT=./data/PlenOctree/checkpoints/syn_sh16
export SCENE=chair
export CONFIG_FILE=nerf_sh/config/blender

python -m octree.extraction \
    --train_dir $CKPT_ROOT/$SCENE/ --is_jaxnerf_ckpt \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree.npz

python -m octree.optimization \
    --input $CKPT_ROOT/$SCENE/octrees/tree.npz \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree_opt.npz

python -m octree.evaluation \
    --input $CKPT_ROOT/$SCENE/octrees/tree_opt.npz \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/

# [Optional] Only used for in-browser viewing.
python -m octree.compression \
    $CKPT_ROOT/$SCENE/octrees/tree_opt.npz \
    --out_dir $CKPT_ROOT/$SCENE/ \
    --overwrite

MISC

Project Vanilla NeRF to PlenOctree

A vanilla trained NeRF can also be converted to a plenoctree for fast inference. To mimic the view-independency property of a NeRF-SH model, we project the vanilla NeRF onto the SH basis functions by sampling view directions at every point in space. Though this makes converting a vanilla NeRF to a plenoctree possible, the projection inevitably degrades the quality of the model, even with a large number of sampled view directions (which takes hours to finish). So we recommend directly training a NeRF-SH model end-to-end.
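
Concretely, the projection estimates each SH coefficient by Monte Carlo integration over view directions sampled uniformly on the sphere (our notation, a standard SH-projection sketch rather than the exact code):

$$k_\ell^m(\mathbf{x}) \approx \frac{4\pi}{N} \sum_{i=1}^{N} c(\mathbf{x}, \mathbf{d}_i)\, Y_\ell^m(\mathbf{d}_i)$$

where $c(\mathbf{x}, \mathbf{d}_i)$ is the color the vanilla NeRF predicts at point $\mathbf{x}$ from sampled direction $\mathbf{d}_i$, and $Y_\ell^m$ are the SH basis functions.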

Below is an example of projecting a trained vanilla NeRF model from the JaxNeRF repo (Download Link) to a plenoctree. After extraction, you can optimize, evaluate, and compress the plenoctree as usual:

export DATA_ROOT=./data/NeRF/nerf_synthetic/ 
export CKPT_ROOT=./data/JaxNeRF/jaxnerf_models/blender/ 
export SCENE=drums
export CONFIG_FILE=nerf_sh/config/misc/proj

python -m octree.extraction \
    --train_dir $CKPT_ROOT/$SCENE/ --is_jaxnerf_ckpt \
    --config $CONFIG_FILE \
    --data_dir $DATA_ROOT/$SCENE/ \
    --output $CKPT_ROOT/$SCENE/octrees/tree.npz \
    --projection_samples 100 \
    --radius 1.3

Note that --projection_samples controls how many view directions are sampled. More sampled view directions give better projection quality but take longer to finish. For example, for the drums scene in the NeRF-Synthetic dataset, 100 / 10000 sampled view directions take about 2 minutes / 2 hours to finish the plenoctree extraction and produce raw plenoctrees with PSNR = 22.49 / 23.84 (before optimization). For comparison, extraction from a NeRF-SH model produces a raw plenoctree with PSNR = 25.01.
