Unofficial & improved implementation of NeRF--: Neural Radiance Fields Without Known Camera Parameters

Overview

[Unofficial code-base] NeRF--: Neural Radiance Fields Without Known Camera Parameters

[ Project | Paper | Official code base ] ⬅️ Thanks to the original authors for the great work!

  • ⚠️ This is an unofficial pytorch re-implementation of the paper NeRF--: Neural Radiance Fields Without Known Camera Parameters.
  • I have reproduced the results on the LLFF-fern dataset, LLFF-flower dataset, personal photos, and some YouTube video clips chosen by myself.
  • This repo contains implementations of both the original paper and my personal modifications & refinements.

Example results

  • Input: raw images of the same scene (order doesn't matter, could be in arbitrary order)
  • Output (after joint optimization):
    • camera intrinsics (focal_x and focal_y)
    • camera extrinsics (inverse of poses: rotations and translations) of each image
    • a 3D implicit representation framework [NeRF] that models both appearance and geometry of the scene

Source 1: random YouTube video clips, time from 00:10:36 to 00:10:42

  • Input: 32 raw photos, sampled at 5 fps, 32 x [540 x 960 x 3] (castle_input)
  • Learned scene model: 1.7 MiB / 158.7k params (8+ MLP layers with a width of 128)
  • ReLU-based NeRF-- (no refinement, stuck in local minima) vs. SIREN-based NeRF-- (no refinement):
    • learned camera poses: castle_1041_relu_pose / castle_1041_pose_siren
    • predicted rgb (appearance, with novel view synthesis): castle_1041_relu / castle_1041_siren
    • predicted depth (geometry, with novel view synthesis): castle_1041_relu_depth / castle_1041_siren

Source 2: random YouTube video clips, time from 00:46:17 to 00:46:28

  • Input: 27 raw photos, sampled at 2.5 fps, 27 x [540 x 960 x 3] (castle_4614_input)
  • Learned scene model: 1.7 MiB / 158.7k params (8+ MLP layers with a width of 128)
  • ReLU-based NeRF-- (with refinement, still stuck in local minima) vs. SIREN-based NeRF-- (with refinement):
    • learned camera poses: castle_4614_pose_siren / castle_4614_pose_siren
    • predicted rgb (appearance, with novel view synthesis): castle_1041_siren / castle_1041_siren
    • predicted depth (geometry, with novel view synthesis): castle_1041_siren / castle_1041_siren

Source 3: photos by @crazyang

  • Input: 22 raw photos, 22 x [756 x 1008 x 3] (piano_input)
  • Learned scene model: 1.7 MiB / 158.7k params (8+ MLP layers with a width of 128)
  • ReLU-based NeRF-- (no refinement) vs. SIREN-based NeRF-- (no refinement):
    • learned camera poses: piano_relu_pose / piano_siren_pose
    • predicted rgb (appearance, with novel view synthesis): piano_relu_rgb / piano_siren_rgb
    • predicted depth (geometry, with novel view synthesis): piano_relu_depth / piano_siren_depth

Notice that the reflection on the piano's side is misinterpreted as transmittance, which is reasonable and acceptable since no prior on the piano's shape is provided.

What is NeRF and what is NeRF--

NeRF

NeRF is a neural (differentiable) rendering framework with great potential. Please view the [NeRF Project Page] for more details.

It represents a scene as a continuous function (typically modeled by an MLP of several layers with non-linear activations), the same idea as in DeepSDF, SRN, DVR, and so on.

Refer to [awesome-NeRF] and [awesome-neural-rendering] to catch up with the recent explosive development in these areas.

NeRF--

NeRF-- modifies the original NeRF from requiring known camera parameters to supporting unknown and learnable camera parameters.

  • NeRF-- does the following work in the training process:

    • Joint optimization of
      • camera intrinsics
      • camera extrinsics
      • a NeRF model (appearance and geometry)
    • Using only raw real-world images and nothing but a photometric loss (image reconstruction loss).
  • SfM+MVS

    • In other words, NeRF-- tackles exactly the same problem as a basic SfM+MVS system like COLMAP, but it learns the camera parameters, geometry, and appearance of the scene simultaneously in a more natural and holistic way, requiring no hand-crafted feature extraction procedures like SIFT and no explicit points, lines, or surfaces.
  • How?

    • Since NeRF is a neural rendering framework (meaning the whole pipeline is differentiable), one can directly compute the gradients of the photometric loss with respect to the camera parameters (see the minimal sketch after this list).
  • 🚀 Wide future of NeRF-based framework --- vision by inverse computer graphics

    • Expect more to come! Imagine directly computing gradients of the photometric loss w.r.t. illumination, object poses, object motion, object deformation, object & background decomposition, object relationships, ...
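
To make this concrete, below is a minimal, self-contained sketch of the idea (hypothetical names such as LearnableCameras and TinyNeRF, plus a toy stand-in for ray generation and rendering; not this repo's actual code): camera parameters are ordinary learnable tensors, so the photometric loss back-propagates into them exactly as it does into the NeRF weights.

import torch
import torch.nn as nn

class LearnableCameras(nn.Module):
    def __init__(self, n_images: int):
        super().__init__()
        self.focal = nn.Parameter(torch.ones(2))             # shared focal_x, focal_y (intrinsics)
        self.rot = nn.Parameter(torch.zeros(n_images, 3))    # per-image axis-angle rotation
        self.trans = nn.Parameter(torch.zeros(n_images, 3))  # per-image translation

class TinyNeRF(nn.Module):
    """Stand-in for the real 8+-layer NeRF MLP: maps a 3D point to (rgb, sigma)."""
    def __init__(self, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),
        )

    def forward(self, x):
        out = self.net(x)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

cams, nerf = LearnableCameras(n_images=32), TinyNeRF()
opt = torch.optim.Adam(list(cams.parameters()) + list(nerf.parameters()), lr=1e-3)

# One fake training step on random data, just to show the gradient flow.
gt_rgb = torch.rand(1024, 3)                  # ground-truth colors of 1024 sampled pixels
img_idx = torch.randint(0, 32, (1024,))       # which image each pixel came from
# Toy stand-in for ray generation + volume rendering: the queried points depend on
# focal / rot / trans, so they stay differentiable w.r.t. the camera parameters.
pts = cams.trans[img_idx] + cams.rot[img_idx] * cams.focal.mean()
pred_rgb, _sigma = nerf(pts)
loss = ((pred_rgb - gt_rgb) ** 2).mean()      # pure photometric (reconstruction) loss
opt.zero_grad()
loss.backward()                               # gradients reach both cams and nerf
opt.step()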

My modifications & refinements / optional features

This repo first implements NeRF-- with nothing changed from the original paper, but it also supports the following optional modifications and will keep being updated.

All the options are configured using yaml configuration files in the configs folder. See details about how to use these configs in the configuration section.

SIREN-based NeRF as backbone

Replaces the ReLU activations of NeRF with sinusoidal (sin) activations. Code borrowed and modified from [lucidrains' implementation of pi-GAN]. Please refer to SIREN and pi-GAN for more theoretical details.

To config:

model:
  framework: SirenNeRF # options: [NeRF, SirenNeRF]
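
For reference, a minimal sketch of the kind of sine-activated layer a SIREN-based backbone is built from, using the SIREN-style initialization (the layer name and exact hyperparameters here are illustrative; this repo's actual module follows lucidrains' pi-GAN code):

import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(w0 * x), with the SIREN initialization scheme."""
    def __init__(self, in_dim: int, out_dim: int, w0: float = 30.0, is_first: bool = False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            # first layer: U(-1/in_dim, 1/in_dim); hidden layers: U(-sqrt(6/in_dim)/w0, +sqrt(6/in_dim)/w0)
            bound = 1.0 / in_dim if is_first else math.sqrt(6.0 / in_dim) / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# A ReLU-based NeRF block and its SIREN counterpart with the same widths,
# so the parameter count (and the ~1.7 MiB model size) stays the same.
relu_block = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU())
siren_block = nn.Sequential(SineLayer(3, 128, is_first=True), SineLayer(128, 128))

x = torch.rand(1024, 3)
print(relu_block(x).shape, siren_block(x).shape)  # both: torch.Size([1024, 128])

Because the sine outputs are bounded and the raw density grows slowly, the sigma output is additionally scaled by model:siren_sigma_mul (see the note further below).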

📌 SIREN-based NeRF compared with ReLU-based NeRF

  • SirenNeRF could lead to smoother learned scenes (especially smoother shapes). (Below: left, ReLU-based NeRF-- with no refinement; right, SIREN-based NeRF-- with no refinement.)
    image-20210418015802348 depth_siren
  • SirenNeRF could lead to better results (smaller losses at convergence, better SSIM/PSNR metrics).

siren_vs_relu_loss

The above two conclusions are also evidenced by the DeepSDF results shown in the SIREN project.

  • SirenNeRF performs slightly worse on scenes with lots of sharp and cluttered edges (due to its continuously differentiable, smoothing nature).

e.g. the LLFF-flower scene:

  • ReLU-based NeRF-- (with refinement) vs. SIREN-based NeRF-- (with refinement):
    • rgb: relu_rgb / rgb_siren
    • depth: relu_depth / depth_siren

Note: since the raw output of SirenNeRF grows relatively slowly, I multiply the raw output (sigma) of SirenNeRF by a factor of 30 (previously 10). To config, use model:siren_sigma_mul

[WIP] Perceptual model

For fewer shots with large viewport changes, I add an option to use a perceptual model (CLIP) and an additional perceptual loss along with the reconstruction loss, as in DietNeRF.

To config:

data:
  N_rays: -1 # options: -1 for whole image and no sampling, an integer > 0 for the number of ray samples
training:
  w_perceptual: 0.01 # options: 0. for no perceptual model & loss, >0 to enable

Note: since the CLIP model requires at least 224x224 resolution and whole images (not sampled rays) as input:

  • data:N_rays must be set to -1 so that whole images are generated during training
  • data:downscale must be set to a proper value, and a GPU with more memory is required
    • or proper up-sampling is required
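
A hedged sketch of what such a CLIP-based perceptual (semantic-consistency) loss can look like, in the spirit of DietNeRF. It assumes the OpenAI clip package (clip.load / encode_image); for brevity it loads the model on CPU and skips CLIP's own input normalization, both of which real training code should handle properly. It is illustrative, not this repo's exact implementation:

import torch
import torch.nn.functional as F
import clip  # the OpenAI CLIP package

# Frozen CLIP image encoder used as the perceptual model.
clip_model, _preprocess = clip.load("ViT-B/32", device="cpu")
clip_model.eval()

def perceptual_loss(rendered_rgb: torch.Tensor, target_rgb: torch.Tensor) -> torch.Tensor:
    """rendered_rgb, target_rgb: (N, 3, H, W) whole images in [0, 1]."""
    # CLIP's visual encoder expects 224x224 inputs, which is why data:N_rays must be -1
    # (render whole images) and data:downscale must keep full-frame rendering affordable.
    rendered = F.interpolate(rendered_rgb, size=(224, 224), mode="bilinear", align_corners=False)
    target = F.interpolate(target_rgb, size=(224, 224), mode="bilinear", align_corners=False)
    with torch.no_grad():
        z_target = clip_model.encode_image(target)    # no gradients for the reference image
    z_rendered = clip_model.encode_image(rendered)    # keep gradients for the rendering
    z_rendered = z_rendered / z_rendered.norm(dim=-1, keepdim=True)
    z_target = z_target / z_target.norm(dim=-1, keepdim=True)
    return (1.0 - (z_rendered * z_target).sum(dim=-1)).mean()   # cosine-distance loss

# total_loss = photometric_loss + training:w_perceptual * perceptual_loss(pred_img, gt_img)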

More choices of rotation representations / intrinsics parameterization

  • rotation representation

Refer to this paper for theoretical suggestions for different choices of SO(3) representations.

To config:

model:
  so3_representation: 'axis-angle' # options: [quaternion, axis-angle, rotation6D]
  • intrinsics parameterization

To config:

model:
  intrinsics_representation: 'square' # options: [square, ratio, exp]
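
For reference, a small sketch of the rotation6D option (the continuous 6D representation of Zhou et al., mapped to a rotation matrix via Gram-Schmidt) and of one plausible reading of the 'square' intrinsics parameterization (a free parameter squared, so the focal length stays positive). The function names and the exact focal scaling are illustrative assumptions, not necessarily what this repo does:

import torch

def rotation_6d_to_matrix(x6: torch.Tensor) -> torch.Tensor:
    """Map a (..., 6) vector to a (..., 3, 3) rotation matrix via Gram-Schmidt."""
    a1, a2 = x6[..., :3], x6[..., 3:]
    b1 = torch.nn.functional.normalize(a1, dim=-1)
    b2 = torch.nn.functional.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)
    return torch.stack([b1, b2, b3], dim=-2)

# 'square' intrinsics parameterization (illustrative): learn a free parameter f and
# use focal = f**2 * image_size, so the effective focal length is always positive.
f_param = torch.nn.Parameter(torch.ones(2))    # learnable, for (focal_x, focal_y)
W, H = 960, 540
focal_x, focal_y = f_param[0] ** 2 * W, f_param[1] ** 2 * H

R = rotation_6d_to_matrix(torch.randn(32, 6))  # one rotation per input image
# Sanity check: R is orthonormal, i.e. R @ R^T == I.
print(torch.allclose(R @ R.transpose(-1, -2), torch.eye(3).expand(32, 3, 3), atol=1e-5))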

Usage

hardware

  • 💻 OS: tested on Ubuntu 16 & 18
  • GPU (all numbers assume 1024 ray samples (the default) + 756 x 1000 resolution + a network width of 128)
    • scene model parameter size
      • 🌟 1.7 MiB for float32.
      • For the NeRF scene model, it's just an MLP of 8+ layers with ReLU/sin activations and a width of 128.
    • 🕐 training time on 2080Ti
      • <10 mins / 0-200 epochs: mainly learning poses, plus rough appearance
      • ~4 hours / ~300 to 10000 epochs: the poses change little further; the NeRF model learns the fine details (geometry & appearance) of the scene
    • GPU memory:
      • (training) ~3300 MiB GPU memory usage
      • (testing / rendering) lower GPU memory usage, but potentially more GPU compute, since testing renders at full resolution while training only uses a small batch of sampled pixels per iteration.

software

  • Python >= 3.5

  • To install requirements, run:

    • Simply run (an anaconda environment is suggested):

      ## install torch & cuda & torchvision using your favorite tools, conda/pip
      # pip install torch torchvision
      
      ## install other requirements
      pip install numpy pyyaml addict imageio imageio-ffmpeg scikit-image tqdm tensorboardX "pytorch3d>=0.3.0" opencv-python
    • Or

      conda env create -f environment.yml
  • Before running any python scripts for the first time, cd to the project root directory and add the root project directory to the PYTHONPATH by running:

    cd /path/to/improved-nerfmm
    source set_env.sh

configuration

There are three choices for giving configuration values:

  • [DO NOT change] configs/base.yaml contains all the default values of the whole config.
  • [Your playground] Specific config yamls in configs folder are for specific tasks. You only need to put related config keys here. It is given to the python scripts using python xxx.py --config /path/to/xxx.yaml.
  • You can also give additional runtime command-line arguments with python xxx.py --xxx:yyy val, to change the config dict: 'xxx':{'yyy': val}

The configuration overwriting priority order:

  • command line args >>overwrites>> --config /path/to/xxx.yaml >>overwrites>> configs/base.yaml
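
A minimal sketch of how such a three-level override can be implemented with nested dict updates (illustrative only; the repo's actual parsing code may differ):

import argparse
import yaml

def nested_update(base: dict, new: dict) -> dict:
    """Recursively overwrite base with higher-priority values from new."""
    for k, v in new.items():
        if isinstance(v, dict) and isinstance(base.get(k), dict):
            nested_update(base[k], v)
        else:
            base[k] = v
    return base

# Priority: command-line args > --config /path/to/xxx.yaml > configs/base.yaml
with open("configs/base.yaml") as f:
    config = yaml.safe_load(f)

parser = argparse.ArgumentParser()
parser.add_argument("--config", type=str, default=None)
args, unknown = parser.parse_known_args()

if args.config is not None:
    with open(args.config) as f:
        nested_update(config, yaml.safe_load(f))

# Turn leftover "--xxx:yyy val" pairs into {'xxx': {'yyy': val}} and merge them last.
for flag, val in zip(unknown[0::2], unknown[1::2]):
    keys = flag.lstrip("-").split(":")
    override = {keys[-1]: yaml.safe_load(val)}   # yaml parses "1024" -> int, "4." -> float
    for k in reversed(keys[:-1]):
        override = {k: override}
    nested_update(config, override)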

data

Datasets, their sources / download scripts, and file paths:

  • LLFF: download the LLFF example data using the script (run in the project root directory): bash dataio/download_example_data.sh (file path: automatic)
  • YouTube video clips: https://www.youtube.com/watch?v=hWagaTjEa3Y (file paths: ./data/castle_1041, ./data/castle_4614)
  • piano photos by @crazyang: Google-drive (file path: ./data/piano)

pre-trained models

You can get pre-trained models in either of the following two ways:

  • Clone the repo using git-lfs, and you will get the pre-trained models and configs in the pretrained folder.
  • From the pretrained folder in the Google Drive

Training

Before running any python scripts for the first time, cd to the project root directory and add the root project directory to the PYTHONPATH by running:

cd /path/to/improved-nerfmm
source set_env.sh

Train on example data (without refinement)

Download the LLFF example data using the script below (run in the project root directory):

bash dataio/download_example_data.sh

Start training:

python train.py --config configs/fern.yaml
  • To specify which GPUs to use (e.g. 2 and 3; in most cases, one GPU is quite enough):

    python train.py --config configs/fern.yaml --device_ids 2,3
    • Currently, this repo uses torch.DataParallel for multi-GPU training.
  • View the training logs and stats output in the experiment folder: ./logs/fern

  • Run tensorboard to monitor the training process:

    tensorboard --logdir logs/fern/events
  • To resume previously interrupted training:

    python train.py --load_dir logs/fern
    • Note: the full config is automatically backed up in the experiment directory when training starts. Thus, when loading from a directory, the scripts will only read your_exp_dir/config.yaml, and configs/base.yaml will not be used.

🚀 Train on your own data

  • 📌 Note on suitable input

    • ① a static scene, ② forward-facing views, ③ small viewport changes.
      • Small viewport changes / forward-facing views
        • so that any given face of an object appears in all views
        • otherwise training fails in the early stages (no reasonable camera poses are learned, and hence the NeRF has no chance).
        • This is mostly because it processes all input images at once.
      • No moving / deforming objects (e.g. a car driving across the street, a bird flying across the sky, people waving hands).
      • No significant illumination/exposure changes (e.g. the camera turning from facing towards the sun to facing away from it).
      • No focal length changes. Currently, all inputs are assumed to share the same camera intrinsics.
      • These limitations are just temporary! (All imaginable limitations have imaginable solutions. Stay tuned!)
  • 📌 Note on training

    • When training with no refinement, the training process is roughly split into two phases:
      • [0 to about 100-300 epochs] The NeRF model learns some rough, blurry pixel blocks, and these rough blocks help with optimizing the camera extrinsics.
      • [300 epochs+ to end] The camera extrinsics are almost fixed, with only very small further changes; the NeRF model learns the fine details of the scene.
    • You should monitor the first 100~300 epochs of the training process. If no meaningful camera poses (especially camera translations in the xy-plane) are learned during these early stages, a miracle is unlikely to happen later.
    • I have not tested on >50 images, but you can give it a try.
  • First, prepare your photos and put them into one separate folder, say /path/to/your_photos/xxx.png.

  • Second:

    • Write a new config file for your data: (you can put any config key mentioned in configs/base.yaml)

      expname: your_expname
      data:
        data_dir: /path/to/your_photos
        #=========
        N_rays: 1024 # number of sampled rays during training.
        downscale: 4.
        #=========
    • And run

      python train.py --config /path/to/your_config.yaml
    • Or you can use some existing config file and run:

      python train.py --config /path/to/xxx.yaml --data:data_dir /path/to/your_photos --expname your_expname
  • The logs and stats will be in the logs/your_expname folder.

  • Monitor the training process with:

    tensorboard --logdir logs/your_expname/events

Train on video clips

  • First, clip your video.mp4 with ffmpeg.

    ffmpeg -ss 00:10:00 -i video.mp4 -to 00:00:05 -c copy video_clip.mp4
    • Note:

      • time format: hh:mm:ss.xxx
      • -ss sets the starting timestamp
      • -to normally sets the stop timestamp, but because -ss is placed before -i here, timestamps are reset after seeking, so -to effectively specifies the clip duration rather than an absolute end time.
  • Second, convert video_clip.mp4 into images:

    mkdir output_dir
    ffmpeg -i video_clip.mp4 -filter:v fps=fps=3/1 output_dir/img-%04d.png
    • Note:

      • 3/1 means 3 frames per second: 3 is the numerator, 1 is the denominator.
  • Then train on your images following the instructions in 🚀 Train on your own data

Automatic training with a pre-train stage and refine stage

Run

python train.py --config ./configs/fern_prefine.yaml

Or

python train.py --config ./configs/fern.yaml --training:num_epoch_pre 1000 --expname fern_prefine

You can also try on your own photos using similar configurations.

Refining a pre-trained NeRF--

This is the step suggested by the original NeRF-- paper: drop all pre-trained parameters except for the camera parameters, and refine.

For example, refine a pre-trained LLFF-fern scene, with original config stored in ./configs/fern.yaml, a pre-trained checkpoint in ./logs/fern/ckpts/final_xxxx.pt, and with a new experiment name fern_refine:

python train.py --config ./configs/fern.yaml --expname fern_refine --training:ckpt_file ./logs/fern/ckpts/final_xxxx.pt  --training:ckpt_only_use_keys cam_params

Note:

  • --training:ckpt_only_use_keys cam_params is used to drop all the keys in the pre-trained state_dict except cam_params when loading the checkpoints.
    • Some warnings like Could not find xxx in checkpoint will be prompted, which is OK and is the exact desired behavior.
  • A new expname is specified, and hence a new experiment directory is used, so that the new logging stats are not mixed with the old ones.

Testing

Free view port rendering

  • To render with camera2world matrices interpolated from the learned poses:
python vis/free_viewport_rendering.py --load_dir /path/to/pretrained/exp_dir --render_type interpolate
  • To render with a spiral camera path, as in the original NeRF repo:
python vis/free_viewport_rendering.py --load_dir /path/to/pretrained/exp_dir --render_type spiral

Visualize learned camera pose

python vis/plot_camera_pose.py --load_dir /path/to/pretrained/exp_dir

Notice that the learned camera phi & t actually parameterize camera2world matrices, i.e. the inverse of the camera extrinsics.

You will get a matplotlib window like this:

[figure: matplotlib 3D plot of the learned camera poses]
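
For reference, the plotted camera-to-world pose and the extrinsic (world-to-camera) matrix are related by a rigid-transform inverse, (R, t) -> (R^T, -R^T t); a minimal sketch (not this repo's code):

import torch

def c2w_to_extrinsic(R: torch.Tensor, t: torch.Tensor):
    """Invert a camera-to-world pose (R, t) into the world-to-camera extrinsic."""
    R_inv = R.transpose(-1, -2)                     # R^T
    t_inv = -(R_inv @ t.unsqueeze(-1)).squeeze(-1)  # -R^T t
    return R_inv, t_inv

# Sanity check with a random orthogonal rotation (from a QR decomposition).
R, _ = torch.linalg.qr(torch.randn(3, 3))
t = torch.randn(3)
R_inv, t_inv = c2w_to_extrinsic(R, t)
point_world = torch.randn(3)
point_cam = R_inv @ point_world + t_inv   # world -> camera (extrinsic)
back = R @ point_cam + t                  # camera -> world (learned pose)
print(torch.allclose(back, point_world, atol=1e-5))  # True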

Road-map & updates

Basic NeRF model

  • 2021-04-17 Basic implementation of the original paper, including training & logging
    • Add quaternion, axis-angle, rotation6D as the rotation representation
    • Add exp, square, ratio for different parameterizations of camera focal_x and focal_y
    • Siren-ized NeRF
    • refinement process as in the original NeRF-- paper
  • Change DataParallel to DistributedDataParallel

Efficiency & training

  • 2021-04-19 Add pre-train for 1000 epochs then refine for 10000 epochs, similar to the official code base.

  • To be listed: recent works on speeding up NeRF

More experiments

  • vSLAM tasks and datasets
  • traditional SfM datasets
  • 2021-04-15 raw videos handling

Better SfM strategy

  • To be listed

More applicable for more scenes

  • NeRF++ & NeRF-- for handling unconstrained scenes

  • NeRF-W

  • some dynamic NeRF framework for dynamic scenes

  • Finish perceptual loss, for fewer shots

Related/used code bases

Citations

  • NeRF--
@article{wang2021nerf,
  title={Ne{RF}$--$: Neural Radiance Fields Without Known Camera Parameters},
  author={Wang, Zirui and Wu, Shangzhe and Xie, Weidi and Chen, Min and Prisacariu, Victor Adrian},
  journal={arXiv preprint arXiv:2102.07064},
  year={2021}
}
  • SIREN
@inproceedings{sitzmann2020siren,
  author={Sitzmann, Vincent and Martel, Julien NP and Bergman, Alexander W and Lindell, David B and Wetzstein, Gordon},
  title={Implicit neural representations with periodic activation functions},
  booktitle={Proc. NeurIPS},
  year={2020}
}
  • Perceptual model / semantic consistency from DietNeRF
@article{jain2021dietnerf,
  title={Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis},
  author={Ajay Jain and Matthew Tancik and Pieter Abbeel},
  journal={arXiv},
  year={2021}
}