Overview

Fast Training of Neural Lumigraph Representations using Meta Learning

Project Page | Paper | Data

Alexander W. Bergman, Petr Kellnhofer, Gordon Wetzstein, Stanford University.
This is the official implementation of "Fast Training of Neural Lumigraph Representations using Meta Learning" (NeurIPS 2021).

Usage

To get started, create a conda environment with all dependencies:

conda env create -f environment.yml
conda activate metanlrpp
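
Optionally, sanity-check the install by confirming that PyTorch imports and can see a GPU (assumes a CUDA-capable machine):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"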

Code Structure

The code is organized as follows:

  • experiment_scripts: directory containing scripts for training and testing MetaNLR++ models
    • pretrain_features.py: pre-train encoder and decoder networks
    • train_sdf_ibr_meta.py: train meta-learned initialization for encoder, decoder, aggregation fn, and neural SDF
    • test_sdf_ibr_meta.py: specialize meta-learned initialization to a specific scene
    • train_sdf_ibr.py: train NLR++ model from scratch without meta-learned initialization
    • test_sdf_ibr.py: evaluate performance on withheld views
  • configs: directory containing configs to reproduce experiments in the paper
    • nlrpp_nlr.txt: configuration for training NLR++ on the NLR dataset
    • nlrpp_dtu.txt: configuration for training NLR++ on the DTU dataset
    • nlrpp_nlr_meta.txt: configuration for training the MetaNLR++ initialization on the NLR dataset
    • nlrpp_dtu_meta.txt: configuration for training the MetaNLR++ initialization on the DTU dataset
    • nlrpp_nlr_metaspec.txt: configuration for training MetaNLR++ on the NLR dataset using the learned initialization
    • nlrpp_dtu_metaspec.txt: configuration for training MetaNLR++ on the DTU dataset using the learned initialization
  • data_processing: directory containing utility functions for processing data
  • torchmeta: torchmeta library for meta-learning
  • utils: directory containing various utility functions for rendering and visualization
  • loss_functions.py: file containing loss functions for evaluation
  • meta_modules.py: contains meta learning wrappers around standard modules using torchmeta
  • modules.py: contains standard modules for coordinate-based networks
  • modules_sdf.py: extends standard modules to coordinate-based network representations of signed distance functions
  • modules_unet.py: contains encoder and decoder modules used for image-space feature processing
  • scheduler.py: utilities for training schedule
  • training.py: training script
  • sdf_rendering.py: functions for rendering SDF
  • sdf_meshing.py: functions for meshing SDF
  • checkpoints: contains checkpoints for some pre-trained models (additional/ablation models available by request)
  • assets: contains paths to checkpoints used as assets, as well as buffers pre-computed over multiple runs (if necessary)

Getting Started

Pre-training Encoder and Decoder

Pre-train the encoder and decoder using the FlyingChairsV2 training dataset as follows:

python experiment_scripts/pretrain_features.py --experiment_name XXX --batch_size X --dataset_path /path/to/FlyingChairs2/train
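
As a concrete, hypothetical example (the experiment name, batch size, and dataset location below are placeholders to adapt to your setup):

python experiment_scripts/pretrain_features.py --experiment_name pretrain_fc2 --batch_size 8 --dataset_path /data/FlyingChairs2/train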

Alternatively, use the checkpoint in the checkpoints directory.

Training NLR++

Train an NLR++ model using the following command:

python experiment_scripts/train_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt --experiment_name XXX --dataset_path /path/to/dtu/scanXXX --checkpoint_img_encoder /path/to/pretrained/encdec
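
For instance, a hypothetical run on DTU scan 65, re-using the encoder/decoder pre-trained above (all names and paths are illustrative, including the checkpoint location):

python experiment_scripts/train_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt --experiment_name nlrpp_scan65 --dataset_path /data/dtu/scan65 --checkpoint_img_encoder logs/pretrain_fc2/checkpoints/model_final.pth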

Note that we have uploaded our processed version of the DTU data here, and the NLR data can be found here.

Meta-learned Initialization (MetaNLR++)

Meta-learn the initialization for the encoder, decoder, aggregation function, and neural SDF using the following command:

python experiment_scripts/train_sdf_ibr_meta.py --config_filepath configs/nlrpp_dtu_meta.txt --experiment_name XXX --dataset_path /path/to/dtu/meta/training --reference_view 24 --checkpoint_img_encoder /path/to/pretrained/encdec
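
For instance, a hypothetical meta-training run over a directory of DTU training scenes (all names and paths are illustrative):

python experiment_scripts/train_sdf_ibr_meta.py --config_filepath configs/nlrpp_dtu_meta.txt --experiment_name meta_dtu --dataset_path /data/dtu/meta_train --reference_view 24 --checkpoint_img_encoder logs/pretrain_fc2/checkpoints/model_final.pth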

Some optimized initializations for the DTU and NLR datasets can be found in the data directory. Additional models can be provided upon request.

Training MetaNLR++ from Initialization

Use the meta-learned initialization to specialize to a specific scene using the following command:

python experiment_scripts/test_sdf_ibr_meta.py --config_filepath configs/nlrpp_dtu_metaspec.txt --experiment_name XXX --dataset_path /path/to/dtu/scanXXX --reference_view 24 --meta_initialization /path/to/learned/meta/initialization
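
For instance, specializing the hypothetical meta_dtu initialization from the previous step to DTU scan 65 (all names and paths are illustrative):

python experiment_scripts/test_sdf_ibr_meta.py --config_filepath configs/nlrpp_dtu_metaspec.txt --experiment_name metanlrpp_scan65 --dataset_path /data/dtu/scan65 --reference_view 24 --meta_initialization logs/meta_dtu/checkpoints/model_final.pth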

Evaluation

Test the converged scene on withheld views using the following command:

python experiment_scripts/test_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt --experiment_name XXX --dataset_path /path/to/dtu/scanXXX --checkpoint_path_test /path/to/checkpoint/to/evaluate
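
For instance, evaluating the hypothetical specialized model from the previous step (the checkpoint path is illustrative):

python experiment_scripts/test_sdf_ibr.py --config_filepath configs/nlrpp_dtu.txt --experiment_name eval_scan65 --dataset_path /data/dtu/scan65 --checkpoint_path_test logs/metanlrpp_scan65/checkpoints/model_final.pth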

Citation & Contact

If you find our work useful in your research, please cite:

@inproceedings{bergman2021metanlr,
  author = {Bergman, Alexander W. and Kellnhofer, Petr and Wetzstein, Gordon},
  title = {Fast Training of Neural Lumigraph Representations using Meta Learning},
  booktitle = {NeurIPS},
  year = {2021},
}

If you have any questions, or would like access to specific ablations or baselines presented in the paper or supplement (the code released here is only a subset of the source code used to generate the results), please feel free to contact the authors. Alex can be contacted via e-mail at [email protected].
