Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Overview

KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

Check out the paper on arXiv: https://arxiv.org/abs/2103.13744

KiloNeRF interactive demo

This repo contains the code for KiloNeRF, together with instructions on how to download pretrained models and datasets. Additionally, we provide a viewer for interactive visualization of KiloNeRF scenes. We have further improved the implementation, and KiloNeRF now runs ~5 times faster than the numbers reported in the first arXiv version of the paper; as a consequence, the Lego scene can now be rendered at around 50 FPS.

Prerequisites

  • OS: Ubuntu 20.04.2 LTS
  • GPU: >= NVIDIA GTX 1080 Ti with >= 460.73.01 driver
  • Python package manager conda

Setup

Open a terminal in the root directory of this repo and set the KILONERF_HOME environment variable:
export KILONERF_HOME=$PWD

Install OpenGL and GLUT development files
sudo apt install libgl-dev freeglut3-dev

Install Python packages
conda env create -f $KILONERF_HOME/environment.yml

Activate kilonerf environment
source activate kilonerf

CUDA extension installation

You can either install our pre-compiled CUDA extension or compile the extension yourself. Compiling it yourself is required if you want to make changes to the CUDA code, but is more tedious. With either option, you can verify the result with the short import check shown after the build instructions below.

Option A: Install pre-compiled CUDA extension

Install pre-compiled CUDA extension
pip install $KILONERF_HOME/cuda/dist/kilonerf_cuda-0.0.0-cp38-cp38-linux_x86_64.whl

Option B: Build CUDA extension yourself

Install the CUDA development kit and restart your shell so the updated PATH takes effect:

wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run
sudo sh cuda_11.1.1_455.32.00_linux.run
echo -e "\nexport PATH=\"/usr/local/cuda/bin:\$PATH\"" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=\"/usr/local/cuda/lib64:\$LD_LIBRARY_PATH\"" >> ~/.bashrc

Download magma from http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz, then build it and install it to /usr/local/magma:

sudo apt install gfortran libopenblas-dev
wget http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.5.4.tar.gz
tar -zxvf magma-2.5.4.tar.gz
cd magma-2.5.4
cp make.inc-examples/make.inc.openblas make.inc
export GPU_TARGET="Maxwell Pascal Volta Turing Ampere"
export CUDADIR=/usr/local/cuda
export OPENBLASDIR="/usr"
make
sudo -E make install prefix=/usr/local/magma
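If the build and installation succeeded, the magma headers and libraries should be available under the install prefix; a quick check, assuming the default include/ and lib/ layout, is:
ls /usr/local/magma/include /usr/local/magma/lib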

For further information on installing magma see: http://icl.cs.utk.edu/projectsfiles/magma/doxygen/installing.html

Finally compile KiloNeRF's C++/CUDA code

cd $KILONERF_HOME/cuda
python setup.py develop
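With either option, a quick sanity check is to import the extension from Python. The module name kilonerf_cuda is taken from the wheel filename above; adjust it if your build produces a differently named package:
python -c "import kilonerf_cuda; print('KiloNeRF CUDA extension loaded')"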

Download pretrained models

We provide pretrained KiloNeRF models for the following scenes: Synthetic_NeRF_Chair, Synthetic_NeRF_Lego, Synthetic_NeRF_Ship, Synthetic_NSVF_Palace, Synthetic_NSVF_Robot

cd $KILONERF_HOME
mkdir logs
cd logs
wget https://www.dropbox.com/s/eqvf3x23qbubr9p/kilonerf-pretrained.tar.gz?dl=1 --output-document=paper.tar.gz
tar -xf paper.tar.gz
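After extraction, you can list the logs directory to confirm that the pretrained models are in place; you should see one entry per scene listed above (the exact folder names depend on the archive):
ls $KILONERF_HOME/logs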

Download NSVF datasets

Credit to NSVF authors for providing their datasets: https://github.com/facebookresearch/NSVF

cd $KILONERF_HOME/data/nsvf
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NSVF.zip && unzip -n Synthetic_NSVF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/Synthetic_NeRF.zip && unzip -n Synthetic_NeRF.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/BlendedMVS.zip && unzip -n BlendedMVS.zip
wget https://dl.fbaipublicfiles.com/nsvf/dataset/TanksAndTemple.zip && unzip -n TanksAndTemple.zip

Since we slightly adjusted the bounding boxes for some scenes, it is important that you pass the -n flag to unzip (as in the commands above) so that the provided bounding boxes are not overwritten.
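If you want to double-check that the adjusted bounding boxes survived the extraction, you can inspect one of them; this assumes the usual NSVF layout with a bbox.txt file per scene:
ls -l $KILONERF_HOME/data/nsvf/Synthetic_NeRF/Lego/bbox.txt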

Usage

To benchmark a trained model, run:
bash benchmark.sh

You can launch the interactive viewer by running:
bash render_to_screen.sh

To train a model yourself, run:
bash train.sh

The default dataset is Synthetic_NeRF_Lego; you can change the dataset by setting the dataset variable in the respective script.
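For example, to switch to the Robot scene from the pretrained models listed above, you could set the variable near the top of train.sh (or benchmark.sh / render_to_screen.sh); the exact line may look slightly different in your copy:
dataset=Synthetic_NSVF_Robot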

Owner: Christian Reiser