
MINE: Continuous-Depth MPI with Neural Radiance Fields

PyTorch implementation of our ICCV 2021 paper:

MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis
Jiaxin Li*¹, Zijian Feng*¹, Qi She¹, Henghui Ding¹, Changhu Wang¹, Gim Hee Lee²
¹ByteDance, ²National University of Singapore
*denotes equal contribution

Our MINE takes a single image as input and densely reconstructs the camera's viewing frustum, from which novel views of the given scene can easily be rendered:

(GIF: novel views rendered from a single image of the fern scene)
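
At a high level, MINE represents the scene as colour and density on fronto-parallel planes inside the camera frustum (a continuous-depth MPI), and novel views are rendered by warping and compositing these planes. The snippet below is a minimal, hypothetical PyTorch sketch of the back-to-front compositing step only; the tensor names and shapes are our own assumptions for illustration, and the homography warping of each plane into the target view is omitted:

import torch

def composite_planes(rgb, alpha):
    # Over-composite MPI planes ordered back-to-front.
    # rgb:   (N, 3, H, W) per-plane colours   (assumed layout)
    # alpha: (N, 1, H, W) per-plane opacities (assumed layout)
    # Returns a (3, H, W) rendered image.
    out = torch.zeros_like(rgb[0])
    for colour, a in zip(rgb, alpha):
        # standard "over" operator: nearer planes occlude farther ones
        out = colour * a + out * (1.0 - a)
    return out

# toy example with random planes, N = 32 depth levels
rgb = torch.rand(32, 3, 256, 384)
alpha = torch.rand(32, 1, 256, 384)
image = composite_planes(rgb, alpha)  # (3, 256, 384)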

The overall architecture of our method:

Run training on the LLFF dataset:

First, set up your conda environment:

conda env create -f environment.yml 
conda activate MINE
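
Optionally, as a quick sanity check (not part of the original instructions), you can verify that PyTorch is installed and sees your GPUs:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"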

Download the pre-downsampled version of the LLFF dataset from Google Drive, unzip it and place it in the project root, then start training with the following command:

sh start_training.sh MASTER_ADDR="localhost" MASTER_PORT=1234 N_NODES=1 GPUS_PER_NODE=2 NODE_RANK=0 WORKSPACE=/run/user/3861/vs_tmp DATASET=llff VERSION=debug EXTRA_CONFIG='{"training.gpus": "0,1"}'
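
For example, a hypothetical single-GPU run only changes the GPU-related arguments (the workspace path and version name below are placeholders):

sh start_training.sh MASTER_ADDR="localhost" MASTER_PORT=1234 N_NODES=1 GPUS_PER_NODE=1 NODE_RANK=0 WORKSPACE=/tmp/mine_workspace DATASET=llff VERSION=llff_single_gpu EXTRA_CONFIG='{"training.gpus": "0"}'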

You can find the TensorBoard logs and checkpoints in the sub-working directory (WORKSPACE + VERSION).
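
For instance, assuming WORKSPACE + VERSION simply means the two paths are joined, the logs from the training command above could be inspected with:

tensorboard --logdir /run/user/3861/vs_tmp/debug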

Apart from the LLFF dataset, we also experimented on the RealEstate10K, KITTI Raw, and Flowers Light Fields datasets; the data pre-processing code and training flow for these datasets will be released later.

Running our pretrained models:

We release pretrained models trained on the RealEstate10K, KITTI, and Flowers datasets:

Dataset       | N (MPI planes) | Input Resolution | Download Link
RealEstate10K | 32             | 384x256          | Google Drive
RealEstate10K | 64             | 384x256          | Google Drive
KITTI         | 32             | 768x256          | Google Drive
KITTI         | 64             | 768x256          | Google Drive
Flowers       | 32             | 512x384          | Google Drive
Flowers       | 64             | 512x384          | Google Drive

To run the models, download the checkpoint and the hyper-parameter YAML file, place them in the same directory, and run the following script:

python3 visualizations/image_to_video.py --checkpoint_path MINE_realestate10k_384x256_monodepth2_N64/checkpoint.pth --gpus 0 --data_path visualizations/home.jpg --output_dir .

Citation

If you find our work helpful to your research, please cite our paper:

@inproceedings{mine2021,
  title={MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis},
  author={Jiaxin Li and Zijian Feng and Qi She and Henghui Ding and Changhu Wang and Gim Hee Lee},
  year={2021},
  booktitle={ICCV},
}
