
Overview



COMPOTE: Calibration Of Multi-focus PlenOpTic camEra.

COMPOTE is a set of tools to pre-calibrate and calibrate (multi-focus) plenoptic cameras (e.g., a Raytrix R12) based on the libpleno library.

Quick Start

Pre-requisites

The COMPOTE applications have a light dependency list:

  • boost version 1.54 and up, portable C++ source libraries,
  • libpleno, an open-source C++ library for plenoptic cameras,

and were compiled and tested on:

  • Ubuntu 18.04.4 LTS, GCC 7.5.0, with Eigen 3.3.4, Boost 1.65.1, and OpenCV 3.2.0.
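
On a comparable Ubuntu system, the third-party dependencies can usually be installed from the standard package repositories. The package names below are the usual Ubuntu ones rather than names taken from the COMPOTE documentation, and libpleno itself has to be built and installed from its own sources beforehand:

sudo apt-get install build-essential cmake libboost-all-dev libeigen3-dev libopencv-dev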

Compilation & Test

If you are comfortable with Linux and CMake and have already installed the prerequisites above, the following commands should compile the applications on your system.

mkdir build && cd build
cmake ..
make -j6
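
If libpleno or another dependency is installed in a non-standard prefix, CMake can usually be pointed at it with the standard CMAKE_PREFIX_PATH variable; this is generic CMake usage rather than a COMPOTE-specific option, and the path below is a placeholder:

cmake .. -DCMAKE_PREFIX_PATH=/path/to/libpleno/install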

To test the calibrate application, you can use the example script from the build directory:

./../example/run_calibration.sh

Applications

Configuration

All applications use .js (JSON) configuration files. The paths to these configuration files are given on the command line using the Boost program options interface.

Options:

Short  Long          Default                  Description
-h     --help                                 Print help messages
-g     --gui         true                     Enable GUI (image viewers, etc.)
-v     --verbose     true                     Enable output with extra information
-l     --level       ALL (15)                 Select level of output to print (can be combined): NONE=0, ERR=1, WARN=2, INFO=4, DEBUG=8, ALL=15
-i     --pimages                              Path to images configuration file
-c     --pcamera                              Path to camera configuration file
-p     --pparams     "internals.js"           Path to camera internal parameters configuration file
-s     --pscene                               Path to scene configuration file
-f     --features    "observations.bin.gz"    Path to observations file
-e     --extrinsics  "extrinsics.js"          Path to save extrinsics parameters file
-o     --output      "intrinsics.js"          Path to save intrinsics parameters file

For instance, to run the calibration:

./calibrate -i images.js -c camera.js -p params.js -f observations.bin.gz -s scene.js -g true -l 7

Here -l 7 combines the output levels ERR (1), WARN (2) and INFO (4). Configuration file examples are given for the dataset R12-A in the folder examples/.

Pre-calibration

precalibrate uses white raw images taken at different apertures to calibrate the Micro-Images Array (MIA) and computes the internal parameters used to initialize the camera and to detect the Blur Aware Plenoptic (BAP) features.

Requirements: minimal camera configuration, white images. Output: radii statistics (.csv), internal parameters, initial camera parameters.
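
A possible invocation is sketched below, assuming precalibrate accepts the common options listed above, with -i pointing at the white-image list, -c at the minimal camera configuration, and -p at the internal parameters file it produces; the file names are placeholders, so check ./precalibrate -h for the exact flag set:

./precalibrate -i white_images.js -c camera.js -p internals.js -g true -l 7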

Features Detection

detect extracts the newly introduced Blur Aware Plenoptic (BAP) features in checkerboard images.

Requirements: calibrated MIA, internal parameters, checkerboard images, and scene configuration. Output: micro-image centers and BAP features.
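
Under the same assumption that detect shares the common options above, a run could look like the sketch below, with the BAP observations written to the default observations.bin.gz; the file names are placeholders:

./detect -i checkerboard_images.js -c camera.js -p internals.js -s scene.js -f observations.bin.gz -g true -l 7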

Camera Calibration

calibrate runs the calibration of the plenoptic camera (set I=0 to act as a pinhole array, or I>0 for the multi-focus case). It generates the intrinsic and extrinsic parameters.

Requirements: calibrated MIA, internal parameters, features, and scene configuration. If none are given, all steps are re-done. Output: error statistics, calibrated camera parameters, camera poses.
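
The command shown in the Configuration section runs calibrate with pre-computed features. Assuming "if none are given" refers to the observations file, omitting -f should trigger re-detection of the features before the optimization; this reading is an assumption, and the file names are placeholders:

./calibrate -i images.js -c camera.js -p params.js -s scene.js -g true -l 7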

Extrinsics Estimation & Calibration Evaluation

extrinsics runs the optimization of the extrinsic parameters given a calibrated camera and generates the poses.

Requirements: internal parameters, features, calibrated camera and scene configuration. Output: error statistics, estimated poses.
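
Assuming extrinsics also accepts the common options above and that -e receives the estimated poses, an invocation could look like this sketch (file names are placeholders):

./extrinsics -i images.js -c camera.js -p params.js -f observations.bin.gz -s scene.js -e extrinsics.js -g true -l 7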

COMPOTE also provides two applications to run a statistical evaluation of the optimized poses obtained with a constant-step linear translation along the z-axis:

  • linear_evaluation gives the absolute errors (mean + std) and the relative errors (mean + std) of translation of the optimized poses,
  • linear_raytrix_evaluation takes a .xyz point cloud obtained with the Raytrix calibration software and gives the absolute errors (mean + std) and the relative errors (mean + std) of translation.

Note: these apps are legacy and have been moved and generalized into the [BLADE] app's evaluate.

Blur Proportionality Coefficient Calibration

blurcalib runs the calibration of the blur proportionality coefficient kappa linking the spread parameter of the PSF with the blur radius. It updates the internal parameters with the optimized value of kappa.

Requirements: internal parameters, features and images. Output: internal parameters.
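
Assuming blurcalib also accepts the common options above, a run could look like the following sketch (file names are placeholders):

./blurcalib -i images.js -c camera.js -p params.js -f observations.bin.gz -g true -l 7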

Datasets

Datasets R12-A, R12-B and R12-C can be downloaded from here. The dataset R12-D and the simulated unfocused plenoptic camera dataset UPC-S are also available from here.

Citing

If you use COMPOTE or libpleno in an academic context, please cite the following publication:

@inproceedings{labussiere2020blur,
  title 	=	{Blur Aware Calibration of Multi-Focus Plenoptic Camera},
  author	=	{Labussi{\`e}re, Mathieu and Teuli{\`e}re, C{\'e}line and Bernardin, Fr{\'e}d{\'e}ric and Ait-Aider, Omar},
  booktitle	=	{Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages		=	{2545--2554},
  year		=	{2020}
}

License

COMPOTE is licensed under the GNU General Public License v3.0. Enjoy!


Owner
ComSEE - Computers that SEE
Computer Vision research team of the Image, Systems of Perception and Robotics (ISPR) department of the Institut Pascal.