Training and Evaluation Code for Neural Volumes

Overview

Neural Volumes

This repository contains training and evaluation code for the paper Neural Volumes. The method learns a 3D volumetric representation of objects and scenes that can be rendered and animated using only calibrated multi-view video.

Citing Neural Volumes

If you use Neural Volumes in your research, please cite the paper:

@article{Lombardi:2019,
 author = {Stephen Lombardi and Tomas Simon and Jason Saragih and Gabriel Schwartz and Andreas Lehrmann and Yaser Sheikh},
 title = {Neural Volumes: Learning Dynamic Renderable Volumes from Images},
 journal = {ACM Trans. Graph.},
 issue_date = {July 2019},
 volume = {38},
 number = {4},
 month = jul,
 year = {2019},
 issn = {0730-0301},
 pages = {65:1--65:14},
 articleno = {65},
 numpages = {14},
 url = {http://doi.acm.org/10.1145/3306346.3323020},
 doi = {10.1145/3306346.3323020},
 acmid = {3323020},
 publisher = {ACM},
 address = {New York, NY, USA},
}

File Organization

The root directory contains several subdirectories and files:

data/ --- custom PyTorch Dataset classes for loading included data
eval/ --- utilities for evaluation
experiments/ --- location of input data and training and evaluation output
models/ --- PyTorch modules for Neural Volumes
render.py --- main evaluation script
train.py --- main training script

Requirements

  • Python (3.6+)
    • PyTorch (1.2+)
    • NumPy
    • Pillow
    • Matplotlib
  • ffmpeg (in PATH, needed to render videos)
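
The Python dependencies can be installed with pip; the package names below are the standard PyPI names (version pinning is left to the reader):

pip install torch numpy pillow matplotlib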

How to Use

There are two main scripts in the root directory: train.py and render.py. Each script takes an experiment configuration file that defines the dataset used and the options for the model (e.g., the type of decoder).

A sample set of input data is provided in the v0.1 release; download the release archive and extract it into the root directory of the repository. experiments/dryice1/data contains the input images and camera calibration data, and experiments/dryice1/experiment1 contains an example experiment configuration file (experiments/dryice1/experiment1/config.py).
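
The shipped example is the authoritative reference for what a configuration file must contain. As a rough, hypothetical sketch of the pattern (the class and method names below are illustrative assumptions, not the repository's actual API; only the Render class name is confirmed by the render.py command shown below):

# Hypothetical sketch of an experiment config.py -- consult the shipped
# experiments/dryice1/experiment1/config.py for the real structure.
class Train:
    """Options read by train.py (names here are illustrative)."""
    batchsize = 16

    def get_dataset(self):
        # return a torch.utils.data.Dataset over the multi-view video
        raise NotImplementedError

class Render:
    """Options read by render.py, selected by the 'Render' argument."""
    def get_dataset(self):
        raise NotImplementedError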

To train the model:

python train.py experiments/dryice1/experiment1/config.py

To render a video of a trained model:

python render.py experiments/dryice1/experiment1/config.py Render

License

See the LICENSE file for details.

Comments
  • Training with our own data

    Hi,
I have a few questions about how the data should be formatted, and about the format of the provided dryice1 data.

• Does the model expect world-space coordinates in meters? I.e., if my extrinsics are already in meters, do I still need world_scale=1/256. in the config.py file?
• Are the extrinsics world-to-camera, and is the rotation convention OpenCV-like (y-down, z-forward, x-right), assuming identity for the pose.txt file?
• How long do I need to train for a sequence of about 200 frames? Also, in the config.py file it seems you are skipping some frames; is that OK to do for my own sequence as well?
• In the KRT file, I see that there are 5 parameters above the RT matrix. Is this the distortion correction in OpenCV format, and is it unused? (A parsing sketch follows this issue.)
• I did not visualize your cameras, so I am not sure how they are distributed. Will it be a problem if I use 50 cameras equally distributed over a half-hemisphere, with the subject already at the world origin and 3.5 meters from every camera? My question is: do I need to filter the training cameras so that the back side of the subject, which is not seen by the 3 input cameras, is excluded?
• How do I choose the input cameras? I have a visualization of the cameras. Which camera config should I use? Is this more a question of which testing camera poses I intend to have, i.e., the narrower the testing cameras' range of view, the closer together the input training cameras can be? Config_0 is more orthogonal and Config_1 sees less of the backside.
    opened by zawlin 32
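
For the KRT question above: a hedged parsing sketch, assuming the common layout of per-camera blocks (camera name; a 3x3 intrinsic matrix K; one line of 5 distortion coefficients; a 3x4 world-to-camera [R|t]; a blank separator). Verify against the dryice1 data before relying on it.

import numpy as np

def load_krt(path):
    # Assumed per-camera block layout; see the note above.
    cameras = {}
    with open(path, "r") as f:
        while True:
            name = f.readline().strip()
            if not name:
                break
            k = [[float(x) for x in f.readline().split()] for _ in range(3)]
            dist = [float(x) for x in f.readline().split()]
            rt = [[float(x) for x in f.readline().split()] for _ in range(3)]
            f.readline()  # consume the blank separator line
            cameras[name] = {"K": np.array(k),
                             "dist": np.array(dist),
                             "Rt": np.array(rt)}
    return cameras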
• Some questions about coordinate transformation

Hello, thanks for releasing your code; I am impressed by your work. I hope to run the code on my own dataset, and I have three questions.

First, I see that pose.txt is used in the code to center the objects. If I use my own data, will the file still work?

Second, I see the code assumes raypos lies between -1 and 1. Is it the matrix in this pose file that narrows the range to [-1, 1]? My own dataset's range is different.

Third, does the code limit the range of the template? Does it have to be between 0 and 255?

    Thanks a lot in advance!

    opened by maobenz 3
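
For the normalization questions above: a hedged illustration of how a world-space point could be mapped into the [-1, 1]^3 raymarching cube, assuming pose.txt stores a 3x4 recentering transform and world_scale is the constant mentioned in the first issue's config.py. These conventions are assumptions, not a verified trace of the code:

import numpy as np

world_scale = 1.0 / 256.0  # value quoted from the dryice1 config above
pose = np.loadtxt("experiments/dryice1/data/pose.txt")  # assumed shape (3, 4)

def world_to_volume(p_world):
    # recenter/reorient with the pose transform, then rescale so the
    # object of interest lands inside the [-1, 1]^3 volume
    p = pose[:, :3] @ p_world + pose[:, 3]
    return p * world_scale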
  • Location of the volume

    Hi there,

I wonder whether the origin of the volume is at (0,0,0)?

I'm testing the method on a public dataset (http://people.csail.mit.edu/drdaniel/mesh_animation), and I know exactly where (0,0,0) is in the images. But the volume seems to float around the scene. This is the first preview from the training process: prog_000001

Each camera points at the scene from an opposite side, so I expect the volume's location in the images to differ accordingly. But for some reason it appears on the same side in all the images. Can you help?

    Thank you.

    opened by lochuynh1989 3
• Any plan to release all the data presented in the paper?

Hi @stephenlombardi,

Thanks for sharing this great work. I was wondering whether you have any plan to release all of the data used in the paper (apart from dryice)?

    Best, Zirui

    opened by ziruiw-dev 2
  • Block-wise initialization scheme

Hi, is there any paper describing the block-wise weight initialization scheme used here?

    https://github.com/facebookresearch/neuralvolumes/blob/8c5fad49b2b05b4b2e79917ee87299e7c1676d59/models/utils.py#L73

    opened by denkorzh 2
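
As a hedged reading of the idea behind such schemes (an assumption based on the question, not a verified restatement of models/utils.py): when a layer's output is split into several blocks downstream, each block of rows can be initialized independently, as if it were its own smaller layer:

import torch

def blockwise_init_(weight, blocksize):
    # Hypothetical sketch: Xavier-initialize each block of output rows
    # independently, so every downstream head starts at the same scale.
    with torch.no_grad():
        for start in range(0, weight.size(0), blocksize):
            torch.nn.init.xavier_uniform_(weight[start:start + blocksize])

# e.g. a linear layer whose output is later split into 4 heads of 64:
blockwise_init_(torch.nn.Linear(256, 4 * 64).weight, 64)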
  • Is there a way to render a 3D file from this?

Hello, I was wondering if there is a way to export an .obj/.fbx file, along with corresponding materials, from this? If not, do you have any suggestions on how to go about it if I were to try to extend the code to incorporate that functionality?

    opened by arlorostirolla 1
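
The repository does not appear to ship a mesh exporter. A common workaround for voxel-style representations (an assumption, not part of this codebase) is to run marching cubes over a decoded opacity volume and write the result as an OBJ:

import numpy as np
from skimage import measure

# Hypothetical post-processing: 'alpha' is a (D, H, W) opacity volume
# dumped from the decoder; producing this dump is left to the reader.
alpha = np.load("volume_alpha.npy")
verts, faces, normals, _ = measure.marching_cubes(alpha, level=0.5)

with open("volume.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for face in faces + 1:  # OBJ face indices are 1-based
        f.write(f"f {face[0]} {face[1]} {face[2]}\n")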
• How can I train and render a person image?

Hi, my name is Luan. I am trying to render a person image, but I am not able to get it running. Could you create a folder for me with the settings set up to use a person image? Thank you.

    opened by LuanDalOrto 1
  • code for hybrid rendering (section 6.2) doesn't exist?

    Hello,

    First of all, thank you for releasing the code for your seminal work. I really think neural volumes is one of the works that popularized differentiable rendering and inspired future works such as neural radiance fields.

    My question is whether this codebase includes the code for the hybrid rendering method outlined in section 6.2 of the paper. I'm trying to fit Neural Volumes to multi-view video of a full-body human being, similar to the 5th subfigure in Fig. 1 of the main paper, but after reading it more carefully it seems as though I would need to use hybrid rendering to be able to render the fine details of the human being.

Could you

1. confirm whether hybrid rendering exists in this codebase, and
2. clarify whether or not it was used to render the full-body human in Fig. 1 of the main paper?

    Thank you in advance.

    opened by andrewsonga 1
  • Misaligned views in rendering

    Hi,

I am working with the MIT dataset to test the network. When I specify a single camera to render, it looks fine throughout the timeline. However, when rendering the rotating video, the cameras are misaligned, as shown in the attached screenshot. All cameras appear clustered at the center, and the views are spread around within the range the cameras cover. Could this be an error in the KRT file or the configuration?

Any suggestion is welcome. (Screenshot: issue_MIT_5_cams)

    opened by CorneliusHsiao 1