Overview

PyNIF3D


PyNIF3D is an open-source PyTorch-based library for research on neural implicit functions (NIF)-based 3D geometry representation. It aims to accelerate research by providing a modular design that allows for easy extension and combination of NIF-related components, as well as readily available paper implementations and dataset loaders.

As of August 2021, the following paper implementations are supported:

  • Convolutional Occupancy Networks (CON)
  • Neural Radiance Fields (NeRF)
  • Implicit Differentiable Renderer (IDR)

Installation

To get started with PyNIF3D, you can either install this repository locally using pip or build the provided Dockerfile.

Local Installation

pip install --user "git+https://github.com/pfnet/pynif3d.git"

The following packages need to be installed in order to ensure the proper functioning of all the PyNIF3D features:

  • torch_scatter>=1.3.0
  • torchsearchsorted>=1.0

A script has been provided to take care of the installation steps for you. Please download it to a directory of your choice and run:

bash post_install.bash
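
If you prefer to install the two dependencies by hand instead of running the script, the steps are roughly equivalent to the following (a sketch; the package sources and version pins here are assumptions, so treat post_install.bash as the authoritative reference):

pip install --user torch-scatter
pip install --user "git+https://github.com/aliutkus/torchsearchsorted.git"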

Docker Build

Enabling CUDA Support

Please make sure the following dependencies are installed in order to build the Docker image with CUDA support:

  • nvidia-docker
  • nvidia-container-runtime

Then register the nvidia runtime by adding the following to /etc/docker/daemon.json:

{
    "runtimes": {
        "nvidia": {
            [...]
        }
    },
    "default-runtime": "nvidia"
}
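
For reference, the complete runtime entry usually looks like this (the standard nvidia-container-runtime configuration; adjust the path if your installation places the binary elsewhere):

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}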

Restart the Docker daemon:

sudo systemctl restart docker

You should now be able to build a Docker image with CUDA support.

Building the Docker Image

git clone https://github.com/pfnet/pynif3d.git
cd pynif3d && nvidia-docker build -t pynif3d .

Running the Container

nvidia-docker run -it pynif3d bash
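
Once inside the container, you can quickly confirm that PyTorch sees the GPU:

python -c "import torch; print(torch.cuda.is_available())"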

Tutorials

Get started with PyNIF3D using the examples provided below:

  • NeRF Tutorial
  • CON Tutorial
  • IDR Tutorial

In addition to the tutorials, pretrained models are also provided and ready to be used. Please consult this page for more information.
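
To give a flavor of the API, the snippet below sketches how a CON pipeline is invoked with a batch of input surface points and a batch of query points. The call signature model(input_points, query_points) matches the pipeline code, but the class name and constructor defaults here are assumptions; follow the CON tutorial for the actual usage.

import torch
from pynif3d.pipeline import ConvolutionalOccupancyNetworks  # hypothetical import path

model = ConvolutionalOccupancyNetworks()
input_points = torch.rand(1, 3000, 3)  # points sampled from an object surface
query_points = torch.rand(1, 2048, 3)  # 3D locations to query for occupancy
prediction = model(input_points, query_points)  # per-query occupancy predictions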

License

PyNIF3D is released under the MIT license. Please refer to this document for more information.

Contributing

We welcome any new contributions to PyNIF3D. Please make sure to read the contributing guidelines before submitting a pull request.

Documentation

Learn more about PyNIF3D by reading the API documentation.

Comments
  • [Question] The default train-run of CON caused Out-Of-Memory

    [Question] The default train-run of CON caused Out-Of-Memory

    (Not an urgent question.)

    I ran the training script from the CON example with the default args (= grid mode) on ShapeNet (downloaded via the occupancy_networks repo's script), using a 32GB GPU. However, it caused an OOM error. When setting -bs 24, it works (memory usage 30622MiB / 32510MiB). Is this the intended behavior?

    $ python -u examples/con/train.py -dd /mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/occupancy_networks/data/ShapeNet -sd saved_models_grid
    Traceback (most recent call last):
      File "examples/con/train.py", line 218, in <module>
        main()
      File "examples/con/train.py", line 214, in main
        train(dataset, model, optimizer, args)
      File "examples/con/train.py", line 103, in train
        prediction = model(input_points, query_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/pipeline/con.py", line 99, in forward
        features = self.feature_encoder(input_points)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 275, in forward
        input_points, c, feature_grid=grid_id
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/local_pool_pointnet.py", line 191, in generate_coordinate_features
        fea_grid = self.feature_processing_fn(fea_grid)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 289, in forward
        x = layer(encoders_features[idx + 1], x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 172, in forward
        x = self.layer(x)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/mnt/nfs-mnj-hot-02/tmp/sosk/pynif3dcon/pynif3d/pynif3d/models/con/unet3d.py", line 82, in forward
        x = self.relu(self.convolution1(self.group_norm1(x)))
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/normalization.py", line 246, in forward
        input, self.num_groups, self.weight, self.bias, self.eps)
      File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 2112, in group_norm
        torch.backends.cudnn.enabled)
    RuntimeError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 31.75 GiB total capacity; 27.60 GiB already allocated; 2.92 GiB free; 27.72 GiB reserved in total by PyTorch)
    

    The environment (at mnj) is as follows (I ran https://github.com/pytorch/pytorch/blob/master/torch/utils/collect_env.py):

    PyTorch version: 1.7.1
    Is debug build: False
    CUDA used to build PyTorch: 10.2
    ROCM used to build PyTorch: N/A
    
    OS: Ubuntu 18.04.5 LTS (x86_64)
    GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
    Clang version: Could not collect
    CMake version: version 3.10.2
    Libc version: glibc-2.10
    
    Python version: 3.7.4 (default, Aug 13 2019, 20:35:49)  [GCC 7.3.0] (64-bit runtime)
    Python platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
    Is CUDA available: True
    CUDA runtime version: 10.2.89
    GPU models and configuration:
    GPU 0: Tesla V100-SXM2-32GB
    GPU 1: Tesla V100-SXM2-32GB
    
    Nvidia driver version: 460.91.03
    cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
    HIP runtime version: N/A
    MIOpen runtime version: N/A
    
    Versions of relevant libraries:
    [pip3] numpy==1.20.1
    [pip3] pytorch-pfn-extras==0.3.2
    [pip3] torch==1.7.1
    [pip3] torchtext==0.8.1
    [pip3] torchvision==0.8.2
    [conda] blas                      1.0                         mkl
    [conda] cudatoolkit               10.2.89              hfd86e86_1
    [conda] mkl                       2020.2                      256
    [conda] mkl-service               2.3.0            py37he8ac12f_0
    [conda] mkl_fft                   1.3.0            py37h54f3939_0
    [conda] mkl_random                1.1.1            py37h0573a6f_0
    [conda] numpy                     1.19.2           py37h54aff64_0
    [conda] numpy-base                1.19.2           py37hfa32c7d_0
    [conda] pytorch                   1.7.1           py3.7_cuda10.2.89_cudnn7.6.5_0    pytorch
    [conda] pytorch3d                 0.4.0           py37_cu102_pyt171    pytorch3d
    [conda] torchvision               0.8.2                py37_cu102    pytorch
    
    question high priority 
    opened by soskek 3
  • Add badge for readthedocs.org

    Add badge for readthedocs.org

    Add a badge for displaying the status of the API documentation build.

    Tasks to be completed

    • [ ] Update README.md

    Definition of Done The badge correctly shows up on README.md
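
    For reference, a readthedocs.org status badge in README.md typically looks like the following (the project slug is an assumption):

    [![Documentation Status](https://readthedocs.org/projects/pynif3d/badge/?version=latest)](https://pynif3d.readthedocs.io/en/latest/)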

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add .readthedocs.yaml

    Add .readthedocs.yaml

    The API documentation successfully builds locally, but not when the project is imported into readthedocs.org.

    Tasks to be completed

    • [ ] Add .readthedocs.yaml

    Definition of Done The documentation successfully builds
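
    A minimal .readthedocs.yaml for a Sphinx project of this era could look like the following (the paths are assumptions about the repository layout):

    version: 2
    sphinx:
      configuration: docs/conf.py
    python:
      version: 3.7
      install:
        - requirements: docs/requirements.txt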

    normal priority size-XS 
    opened by mihaimorariu 0
  • Remove post_install.bash

    Remove post_install.bash

    The installation procedure currently requires running the post_install.bash script in order to install torchsearchsorted and torch_scatter. These dependencies should be added to setup.py instead, allowing users to install PyNIF3D simply via pip install -e (see the sketch below). The only reason the post-installation script exists is that PyNIF3D has not yet been tested with newer versions of the two dependencies.

    Tasks to be completed

    • [ ] TODO

    Definition of Done A clear and concise description of the conditions for marking the issue as completed.
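
    For reference, moving the two packages into setup.py could look roughly like this once newer versions are verified (a sketch; the direct Git reference for torchsearchsorted is an assumption, since that package is not published on PyPI):

    # setup.py (hypothetical excerpt)
    from setuptools import setup, find_packages

    setup(
        name="pynif3d",
        packages=find_packages(),
        install_requires=[
            "torch_scatter>=1.3.0",
            "torchsearchsorted @ git+https://github.com/aliutkus/torchsearchsorted.git",
        ],
    )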

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add color jitter and on-the-fly loading to the DTU dataset loader (pixelNeRF)

    Add color jitter and on-the-fly loading to the DTU dataset loader (pixelNeRF)

    Implement the DTU dataset loader for the pixelNeRF paper.

    Tasks to be completed

    • [x] Implement the color jitter
    • [x] Implement the on-the-fly loading
    • [x] Review

    Definition of Done All unit tests are passing.
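
    For illustration, a color jitter augmentation along these lines can be expressed with torchvision >= 0.8 (the jitter magnitudes are placeholder assumptions, not the values used by pixelNeRF):

    import torch
    from torchvision import transforms

    # Randomly perturb brightness/contrast/saturation/hue of each training image.
    jitter = transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05)
    image = torch.rand(3, 128, 128)  # stand-in for an RGB training image in [0, 1]
    augmented = jitter(image)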

    normal priority size-XS 
    opened by mihaimorariu 0
  • Add pipeline for PixelNeRF

    Add pipeline for PixelNeRF

    Integrate all the components of PixelNeRF into the pipeline.

    Tasks to be completed

    • [ ] Implement PixelNeRF pipeline
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done All unit tests are passing.

    feature normal priority size-M 
    opened by mihaimorariu 0
  • Pixel to camera conversion

    Pixel to camera conversion

    Add helper function for pixel to camera conversion.

    Tasks to be completed

    • [ ] Implement the helper function
    • [ ] Add unit tests
    • [ ] Review

    Definition of Done All the unit tests are passing.
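
    A generic pinhole-model version of this conversion is sketched below (back-projection through the inverse intrinsics; this is the textbook formulation, not PyNIF3D's actual helper):

    import torch

    def pixel_to_camera(pixels: torch.Tensor, depth: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
        # pixels: (N, 2) pixel coordinates, depth: (N,), K: (3, 3) camera intrinsics.
        ones = torch.ones(pixels.shape[0], 1)
        homogeneous = torch.cat([pixels, ones], dim=-1)  # (N, 3) homogeneous pixels
        rays = homogeneous @ torch.inverse(K).T          # (N, 3) camera-space rays
        return rays * depth.unsqueeze(-1)                # scale each ray by its depth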

    feature normal priority size-XS 
    opened by mihaimorariu 0
  • Add pixelNeRF to the repository

    Add pixelNeRF to the repository

    The pixelNeRF paper will be added to the repository: https://arxiv.org/abs/2012.02190

    Tasks to be completed

    • [x] Implement DTU dataset loader
    • [x] Implement the encoder
    • [x] Implement the NIF model
    • [x] Implement the renderer
    • [x] Implement the pipeline
    • [x] Implement the losses
    • [x] Write tutorial on how to use the code
    • [ ] Review

    Definition of Done

    • [x] The results are reproduced
    • [x] Training, evaluation scripts are provided
    • [x] Tutorial is provided
    feature normal priority size-L 
    opened by mihaimorariu 0
  • Support for multi-batch processing in torchsearchsorted

    Support for multi-batch processing in torchsearchsorted

    The implementation of torchsearchsorted that is currently being used does not support multi-batch processing. A for loop is currently used in NeRF training to handle batch sizes larger than one, but it significantly slows down training. This needs to be fixed (see the sketch below).

    Tasks to be completed

    • [ ] TODO

    Definition of Done Training NeRF with batch size > 1 yields similar PSNR on the evaluation set after removing the for loop and replacing it with a multi-batch-based torchsearchsorted.
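
    For illustration, torch.searchsorted in PyTorch >= 1.6 accepts batched inputs directly, which is one way the loop could be removed (a sketch of the idea, not the actual fix):

    import torch

    bins = torch.sort(torch.rand(4, 128), dim=-1).values  # (batch, n_bins), sorted per row
    values = torch.rand(4, 64)                             # (batch, n_values)

    # Loop-based lookup, as currently done during NeRF training:
    idx_loop = torch.stack([torch.searchsorted(bins[i], values[i]) for i in range(4)])

    # Single batched lookup:
    idx_batched = torch.searchsorted(bins, values)
    assert torch.equal(idx_loop, idx_batched)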

    feature low priority size-XS 
    opened by mihaimorariu 0
Releases
  • 0.1 (Aug 18, 2021)

    Initial version of PyNIF3D.

    Changelog:

    • Added a decoupled structure for NIF-based inference and training
      • Sampling functionalities (ray/pixel/feature)
      • NIF model rendering with generic chunking
      • Aggregation functionalities to generate final pixel/occupancy
    • Added dataset loaders:
      • LLFF
      • NeRF Blender
      • Deep Voxels
      • Shapes3D
      • DTU MVS
    • Added algorithm pipelines:
      • Convolutional Occupancy Networks (CON)
      • Neural Radiance Fields (NeRF)
      • Implicit Differentiable Renderer (IDR)
    • Added encoders:
      • Positional encoding (see the sketch after this changelog)
      • Fourier encoding
    • Added pre-trained models
    • Added a function for generating rays from a camera matrix
    • Added generic layer generation with bias and weight initializers
    • Added detailed logging structure through decorators
      • If the logging level is set to DEBUG, function inputs/outputs can be logged, which is expected to reduce debugging time
    • Added explanatory exceptions and exception messages
    • Added tutorials and sample scripts
    • Added unit tests
    • Added linter
    • Added Sphinx configuration support
    • Added Dockerfile and pip installation support
    • Added comprehensive documentation to each function
    • Added CI support
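
    As an illustration of the positional encoding listed in the changelog, the NeRF-style formulation maps each input coordinate to sines and cosines at geometrically increasing frequencies. The sketch below is the generic formulation; PyNIF3D's own encoder interface may differ.

    import torch

    def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
        # Map each coordinate to [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1.
        freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
        angles = x.unsqueeze(-1) * freqs      # (..., dim, num_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(start_dim=-2)      # (..., dim * 2 * num_freqs)

    points = torch.rand(1024, 3)
    encoded = positional_encoding(points)     # shape: (1024, 60)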
Owner
Preferred Networks, Inc.