This is the dataset and code release of the OpenRooms Dataset.

Overview

OpenRooms Dataset Release

Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, Yuhan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Dataset Overview

Figure: The OpenRooms dataset creation pipeline.

This is the webpage for downloading the OpenRooms dataset. We first introduce the rendered images and the various ground-truths. Later, we will describe how to render your own images using the OpenRooms dataset creation pipeline. For each type of data, we offer two formats, zip files and individual folders, so that users can either download the whole dataset efficiently or download individual folders for specific scenes. To download the files, we recommend the tool Rclone; otherwise users may suffer from slow download speeds and instability. If you have any questions, please email [email protected].

We render six versions of images for all the scenes. The rendered results are saved in 6 folders: main_xml, main_xml1, mainDiffMat_xml, mainDiffMat_xml1, mainDiffLight_xml and mainDiffLight_xml1. All 6 versions are built from the same CAD models. main_xml, mainDiffMat_xml and mainDiffLight_xml share one set of camera views, while main_xml1, mainDiffMat_xml1 and mainDiffLight_xml1 share the other set of camera views. main_xml(1) and mainDiffMat_xml(1) have the same lighting but different materials, while main_xml(1) and mainDiffLight_xml(1) have the same materials but different lighting. main_xml and main_xml1 differ in both lighting and material configuration. We believe this configuration can potentially help develop novel applications for image editing. Two example scenes from main_xml, mainDiffMat_xml and mainDiffLight_xml are shown below.

Figure: Example scenes rendered with the main_xml, mainDiffMat_xml and mainDiffLight_xml configurations.

News: We have currently released only the rendered images of the dataset. All ground-truths will be released in a few days. The dataset creation pipeline will also be released soon.

Rendered Images and Ground-truths

All rendered images and the corresponding ground-truths are saved in folder data/rendering/data/. In the following, we will detail each type of rendered data and how to read and interpret them. Two example scenes with images and all ground-truths are included in Demo and Demo.zip.

  1. Images and Images.zip: The 480 × 640 HDR images im_*.hdr, which can be read with the following python command.

    import cv2
    im = cv2.imread('im_1.hdr', -1)[:, :, ::-1]

    We render images for main_xml(1), mainDiffMat_xml(1) and mainDiffLight_xml(1).
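
    Because the HDR values are linear and unbounded, they are usually tonemapped before being displayed. Below is a minimal sketch for a quick preview; the gamma-2.2 tonemap and the output filename are illustrative choices, not part of the dataset.

    import cv2
    import numpy as np

    # Load the linear HDR image (BGR -> RGB) and apply a simple gamma-2.2 tonemap
    im = cv2.imread('im_1.hdr', -1)[:, :, ::-1]
    ldr = np.clip(np.clip(im, 0, None) ** (1.0 / 2.2), 0, 1)
    # Convert back to BGR uint8 before writing the preview with OpenCV
    cv2.imwrite('im_1_preview.png', (ldr[:, :, ::-1] * 255).astype(np.uint8))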

  2. Material and Material.zip: The 480 × 640 diffuse albedo maps imbaseColor_*.png and roughness maps imroughness_*.png. Note that the diffuse albedo maps are saved in sRGB space. To load them into linear RGB space, we can use the following python commands. The roughness maps are saved in linear space and can be read directly.

    import cv2
    import numpy as np
    im = cv2.imread('imbaseColor_1.png')[:, :, ::-1]
    im = (im.astype(np.float32) / 255.0) ** (2.2)

    We only render the diffuse albedo maps and roughness maps for main_xml(1) and mainDiffMat_xml(1), because mainDiffLight_xml(1) shares the same material maps as main_xml(1).
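
    Since the roughness map is stored in linear space, it only needs to be rescaled to [0, 1] after reading. A minimal sketch; reading it as a single channel is an assumption (reading all channels works equally well if the map is stored with replicated channels).

    import cv2
    import numpy as np

    # The roughness map is linear, so no gamma correction is needed
    rough = cv2.imread('imroughness_1.png', cv2.IMREAD_GRAYSCALE)
    rough = rough.astype(np.float32) / 255.0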

  3. Geometry and Geometry.zip: The 480 × 640 normal maps imnormal_*.png and depth maps imdepth_*.dat. The R, G and B channels of the normal map correspond to the right, up and backward directions of the image plane. To load the depth map, we can use the following python commands.

    import struct
    import numpy as np

    with open('imdepth_1.dat', 'rb') as fIn:
        # Read the height and width of the depth map
        hBuffer = fIn.read(4)
        height = struct.unpack('i', hBuffer)[0]
        wBuffer = fIn.read(4)
        width = struct.unpack('i', wBuffer)[0]
        # Read the depth values
        dBuffer = fIn.read(4 * width * height)
        depth = np.array(
            struct.unpack('f' * height * width, dBuffer),
            dtype=np.float32)
        depth = depth.reshape(height, width)

    We render normal maps for main_xml(1) and mainDiffMat_xml(1), and depth maps for main_xml(1).
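
    The normal maps are stored as PNGs, so the components have to be mapped back from [0, 255] to [-1, 1]. Below is a minimal sketch assuming the common linear encoding; the exact encoding convention is an assumption, not something confirmed by this page.

    import cv2
    import numpy as np

    # Read the normal map (BGR -> RGB) and map pixel values from [0, 255] to [-1, 1]
    normal = cv2.imread('imnormal_1.png')[:, :, ::-1]
    normal = normal.astype(np.float32) / 127.5 - 1.0
    # Re-normalize to unit length to remove quantization error
    norm = np.maximum(np.linalg.norm(normal, axis=2, keepdims=True), 1e-6)
    normal = normal / norm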

  4. Mask and Mask.zip: The 480 × 640 greyscale masks immask_*.png for light sources. The pixel value 0 represents the region of environment maps. The pixel value 0.5 represents the region of lamps. Otherwise, the pixel value will be 1. We render the ground-truth masks for main_xml(1) and mainDiffLight_xml(1).
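
    A minimal sketch for splitting the mask into its three regions, assuming the mask is stored as an 8-bit PNG so that the values 0, 0.5 and 1 correspond to roughly 0, 128 and 255; the thresholds below are illustrative.

    import cv2
    import numpy as np

    mask = cv2.imread('immask_1.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    # 0 -> environment map (window) region, ~0.5 -> lamp region, 1 -> everything else
    envMask = mask < 0.25
    lampMask = (mask >= 0.25) & (mask < 0.75)
    otherMask = mask >= 0.75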

  5. SVLighting: The (120 × 16) × (160 × 32) per-pixel environment maps imenv_*.hdr. The spatial resolution is 120 × 160 while the environment map resolution is 16 × 32. To read the per-pixel environment maps, we can use the following python commands.

    import cv2

    # Read the envmap of resolution 1920 x 5120 x 3 in RGB format
    env = cv2.imread('imenv_1.hdr', -1)[:, :, ::-1]
    # Reshape and permute into per-pixel environment maps of shape (120, 160, 16, 32, 3)
    env = env.reshape(120, 16, 160, 32, 3)
    env = env.transpose(0, 2, 1, 3, 4)

    We render per-pixel environment maps for main_xml(1), mainDiffMat_xml(1) and mainDiffLight_xml(1). Since the total size of the per-pixel environment maps is 4.0 TB, we do not provide an extra .zip format for downloading. Please consider using the tool Rclone if you wish to download all the per-pixel environment maps.
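
    After the reshape and transpose above, env has shape (120, 160, 16, 32, 3). Since the image resolution is 480 × 640 and the spatial resolution of the environment maps is 120 × 160, each environment map covers a 4 × 4 block of image pixels. A small usage sketch (the pixel coordinates below are arbitrary):

    # Look up the hemispherical environment map for image pixel (r, c)
    r, c = 200, 320
    envAtPixel = env[r // 4, c // 4]   # shape (16, 32, 3)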

  6. SVSG and SVSG.zip: The ground-truth spatially-varying spherical Gaussian (SG) parameters imsgEnv_*.h5, computed from this optimization code. We generate the ground-truth SG parameters for main_xml(1), mainDiffMat_xml(1) and mainDiffLight_xml(1). For the detailed format, please refer to the optimization code.

  7. Shading and Shading.zip: The 120 × 160 diffuse shading imshading_*.hdr computed by integrating the per-pixel environment maps. We render shading for main_xml(1), mainDiffMat_xml(1) and mainDiffLight_xml(1).
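
    For reference, the diffuse shading of a pixel is the cosine-weighted integral of its environment map over the hemisphere. Below is an illustrative sketch of this integration, assuming env from item 5 with shape (120, 160, 16, 32, 3) and an elevation × azimuth parameterization of the hemisphere; the exact angular convention used to generate imshading_*.hdr is an assumption.

    import numpy as np

    nTheta, nPhi = 16, 32
    # Elevation measured from the surface normal, sampled at cell centers (assumed convention)
    theta = (np.arange(nTheta) + 0.5) / nTheta * (np.pi / 2)
    dTheta, dPhi = (np.pi / 2) / nTheta, 2 * np.pi / nPhi
    # Cosine term times the solid angle of each environment map cell
    weight = np.cos(theta) * np.sin(theta) * dTheta * dPhi
    shading = (env * weight[None, None, :, None, None]).sum(axis=(2, 3))   # (120, 160, 3)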

  8. SVLightingDirect and SVLightingDirect.zip: The (30 × 16) × (40 × 32) per-pixel environment maps with direct illumination only, imenvDirect_*.hdr. The spatial resolution is 30 × 40 while the environment map resolution is 16 × 32. The direct per-pixel environment maps can be loaded in the same way as the per-pixel environment maps. We only render direct per-pixel environment maps for main_xml(1) and mainDiffLight_xml(1) because the direct illumination of mainDiffMat_xml(1) is the same as main_xml(1).

  9. ShadingDirect and ShadingDirect.zip: The 120 × 160 direct shading imshadingDirect_*.rgbe. To load the direct shading, we can use the following python command.

    im = cv2.imread('imshadingDirect_1.rgbe', -1)[:, :, ::-1]

    Again, we only render direct shading for main_xml(1) and mainDiffLight_xml(1).

  10. SemanticLabel and SemanticLabel.zip: The 480 × 640 semantic segmentation labels imsemLabel_*.npy. We provide semantic labels for 45 classes of commonly seen objects and layout components in indoor scenes. The 45 classes can be found in semanticLabels.txt. We only render the semantic labels for main_xml(1).
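
    A minimal sketch for loading the semantic labels, assuming the .npy file stores per-pixel class indices and that semanticLabels.txt lists one class name per line:

    import numpy as np

    # Per-pixel class indices of shape (height, width)
    semLabel = np.load('imsemLabel_1.npy')
    with open('semanticLabels.txt', 'r') as fIn:
        classNames = [line.strip() for line in fIn]
    labels, counts = np.unique(semLabel, return_counts=True)
    for l, c in zip(labels, counts):
        name = classNames[l] if l < len(classNames) else 'unknown'
        print(name, c)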

  11. LightSource and LightSource.zip: The light source information, including the geometry, shadow and direct shading of each light source. In each scene directory, the light_x directory corresponds to im_x.hdr, where x = 0, 1, 2, 3 ... In each light_x directory, you will see files with numbers in their names. The numbers correspond to the light source IDs, i.e. if the IDs range from 0 to 4, there are 5 light sources in the scene. A sketch that loops over the light sources of one light_x directory is given after the list below.

    • Geometry: We provide the geometry annotation box_*.dat for windows and lamps for main_xml(1) only. To read the annotation, we can use the following python commands.
      import pickle
      with open('box_0.dat', 'rb') as fIn:
          info = pickle.load(fIn)
      There are 3 items saved in the dictionary, which we list below.
      • isWindow: True if the light source is a window, False if it is a lamp.
      • box3D: The 3D bounding box of the light source, including the center (center), orientation (xAxis, yAxis, zAxis) and size (xLen, yLen, zLen).
      • box2D: The 2D bounding box of the light source on the image plane (x1, y1, x2, y2).
    • Mask: The 120 × 160 2D binary masks for light sources mask*.png. We only provide the masks for main_xml(1).
    • Direct shading: The 120 × 160 direct shading for each light source imDS*.rgbe. We provide the direct shading for main_xml(1) and mainDiffLight_xml(1).
    • Direct shading without occlusion: The 120 × 160 direct shading without occlusion for each light source imNoOcclu*.rgbe. We provide the direct shading without occlusion for main_xml(1) and mainDiffLight_xml(1).
    • Shadow: The 120 × 160 shadow maps for each light source imShadow*.png. We render the shadow map for main_xml(1) only.
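
    As mentioned above, here is a small sketch that loops over the light sources of one light_x directory, reading the geometry annotation together with the corresponding mask and shadow map. The exact per-light filenames (e.g. mask0.png, imShadow0.png) are assumptions based on the wildcards above.

    import glob
    import os
    import pickle
    import cv2

    lightDir = 'light_0'
    for boxFile in sorted(glob.glob(os.path.join(lightDir, 'box_*.dat'))):
        with open(boxFile, 'rb') as fIn:
            info = pickle.load(fIn)
        lightId = os.path.basename(boxFile)[len('box_'):-len('.dat')]
        print('Light', lightId, 'isWindow:', info['isWindow'])
        # Assumed filename patterns for the per-light mask and shadow map
        mask = cv2.imread(os.path.join(lightDir, 'mask%s.png' % lightId), cv2.IMREAD_GRAYSCALE)
        shadow = cv2.imread(os.path.join(lightDir, 'imShadow%s.png' % lightId), cv2.IMREAD_GRAYSCALE)
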
  12. Friction and Friction.zip: The friction coefficients computed from our SVBRDF following the method proposed by Zhang et al. We compute the friction coefficients for main_xml(1) and mainDiffLight_xml(1).

Dataset Creation

  1. GPU renderer: The OptiX-based GPU path tracer for rendering. Please refer to the github repository for detailed instructions.
  2. Tileable texture synthesis: The tileable texture synthesis code, used to make sure that the SVBRDF maps are tileable. Please refer to the github repository for more details.
  3. Spherical Gaussian optimization: The code to fit per-pixel environment maps with spherical Gaussian lobes using LBFGS optimization. Please refer to the github repository for detailed instructions.

The CAD models, environment maps, materials and code required to recreate the dataset will be released soon.

Applications

  1. Inverse Rendering: Trained on our dataset, we achieve state-of-the-art results on several inverse rendering benchmarks, especially lighting estimation. Please refer to our github repository for the training and testing code.
  2. Robotics: Our robotics applications will be released soon.

Related Datasets

The OpenRooms dataset is built on the datasets listed below. We thank their creators for their excellent contributions. Please refer to the prior datasets for their licenses and terms of use if you wish to use them to create your own dataset.

  1. ScanNet dataset: The real 3D scans of indoor scenes.
  2. Scan2CAD dataset: The alignment of CAD models to the scanned point clouds.
  3. Laval outdoor lighting dataset: HDR outdoor environment maps.
  4. HDRI Haven lighting dataset: HDR outdoor environment maps.
  5. PartNet dataset: CAD models.
  6. Adobe Stock: High-quality microfacet SVBRDF texture maps. Please license the materials from the Adobe website.