
Overview

Point Cloud Denoising

Figure: de-noising pipeline from input (raw point cloud) through segmentation to output, with points labelled valid/clear, fog, rain, and de-noised.

Abstract

Lidar sensors are frequently used in environment perception for autonomous vehicles and mobile robotics to complement camera, radar, and ultrasonic sensors. Adverse weather conditions significantly impact the performance of lidar-based scene understanding by causing undesired measurement points, which in turn lead to missed detections and false positives. In heavy rain or dense fog, water drops can be misinterpreted as objects in front of the vehicle, bringing a mobile robot to a full stop. In this paper, we present the first CNN-based approach to understand and filter out such adverse weather effects in point cloud data. Using a large data set obtained in controlled weather environments, we demonstrate a significant performance improvement of our method over state-of-the-art approaches based on geometric filtering.

Download Dataset

Information: Click here for registration and download.

Dataset Information

  • each channel contains a 32x400 matrix, ordered by layers and columns
  • the coordinate system follows the DIN ISO 8855 convention for land vehicles (see Wikipedia)
hdf5 channels:
  labels_1      ground truth labels (0: no label, 100: valid/clear, 101: rain, 102: fog)
  distance_m_1  distance in meters
  intensity_1   raw intensity of the sensor
  sensorX_1     x-coordinates in the projected 32x400 view
  sensorY_1     y-coordinates in the projected 32x400 view
  sensorZ_1     z-coordinates in the projected 32x400 view
hdf5 attributes:
  dateStr                     date of the recording (yyyy-mm-dd)
  timeStr                     timestamp of the recording (HH:MM:SS)
  meteorologicalVisibility_m  ground truth meteorological visibility in meters, provided by the climate chamber
  rainfallRate_mmh            ground truth rainfall rate in mm/h, provided by the climate chamber

# example for reading the hdf5 attributes (per-file weather ground truth)
import h5py

with h5py.File(filename, "r", driver="core") as hdf5:
    # the attributes hold the recording date/time and the climate-chamber
    # ground truth (meteorological visibility, rainfall rate)
    weather_data = dict(hdf5.attrs)
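
The channel data of a frame can be read in the same way. The following is a minimal sketch, assuming the channel names listed above; filename again stands for the path to a single hdf5 file from the dataset:

import h5py
import numpy as np

with h5py.File(filename, "r", driver="core") as hdf5:
    labels = hdf5["labels_1"][()]        # 32x400 ground truth labels (0/100/101/102)
    distance = hdf5["distance_m_1"][()]  # 32x400 range in meters
    intensity = hdf5["intensity_1"][()]  # 32x400 raw sensor intensity
    # stack the projected coordinates into a 32x400x3 array of Cartesian points
    xyz = np.stack(
        [hdf5["sensorX_1"][()], hdf5["sensorY_1"][()], hdf5["sensorZ_1"][()]],
        axis=-1,
    )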

Getting Started

We provide documented tools for visualization in Python using ROS. To use them, you first need to install ROS and the rospy client API.

  • install rospy
apt install python-rospy  

Then start "roscore" and "rviz" in separate terminals.

Afterwards, you can use the visualization tool:

  • clone the repository:
cd ~/workspace
git clone https://github.com/rheinzler/PointCloudDeNoising.git
cd ~/workspace/PointCloudDeNoising
  • create a virtual environment:
mkdir -p ~/workspace/PointCloudDeNoising/venv
virtualenv --no-site-packages -p python3 ~/workspace/PointCloudDeNoising/venv
  • source virtual env and install dependencies:
source ~/workspace/PointCloudDeNoising/venv/bin/activate
pip install -r requirements.txt
  • start visualization:
cd src
python visu.py
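
For a quick sanity check of a single frame in rviz without the provided visu.py, the following minimal sketch publishes the projected coordinates of one hdf5 file as a PointCloud2 message. The file path and topic name are placeholders, and roscore plus rviz must already be running:

# minimal sketch (not part of the repository): publish one frame for rviz
import h5py
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

rospy.init_node("pointcloud_denoising_demo")
pub = rospy.Publisher("/denoising/points", PointCloud2, queue_size=1)

# read the Cartesian coordinates of a single frame (placeholder file name)
with h5py.File("example_frame.hdf5", "r") as hdf5:
    x = hdf5["sensorX_1"][()].reshape(-1)
    y = hdf5["sensorY_1"][()].reshape(-1)
    z = hdf5["sensorZ_1"][()].reshape(-1)

header = Header(frame_id="base_link", stamp=rospy.Time.now())
cloud = point_cloud2.create_cloud_xyz32(header, list(zip(x, y, z)))

rate = rospy.Rate(1)  # republish at 1 Hz so rviz can subscribe at any time
while not rospy.is_shutdown():
    pub.publish(cloud)
    rate.sleep()

In rviz, set the Fixed Frame to base_link and add a PointCloud2 display on the /denoising/points topic.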

Notes:

  • We used the following label mapping for a single lidar point: 0: no label, 100: valid/clear, 101: rain, 102: fog (a minimal filtering sketch follows these notes)
  • Before executing the script, adjust the input path to your local copy of the dataset
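
As a sketch of how the label mapping can be used (assuming the channel names from the dataset description and filename as a placeholder path), the clear-weather points can be separated from rain and fog returns like this:

import h5py
import numpy as np

LABELS = {0: "no label", 100: "valid/clear", 101: "rain", 102: "fog"}

with h5py.File(filename, "r") as hdf5:
    labels = hdf5["labels_1"][()]
    xyz = np.stack(
        [hdf5["sensorX_1"][()], hdf5["sensorY_1"][()], hdf5["sensorZ_1"][()]],
        axis=-1,
    )

# keep only the clear-weather returns, i.e. drop rain and fog points
denoised_xyz = xyz[labels == 100]

# per-class point counts as a quick sanity check of the ground truth
values, counts = np.unique(labels, return_counts=True)
print({LABELS.get(int(v), str(v)): int(n) for v, n in zip(values, counts)})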

Reference

If you find our work on lidar point-cloud de-noising in adverse weather useful for your research, please consider citing it:

@article{PointCloudDeNoising2020, 
  author   = {Heinzler, Robin and Piewak, Florian and Schindler, Philipp and Stork, Wilhelm},
  journal  = {IEEE Robotics and Automation Letters}, 
  title    = {CNN-based Lidar Point Cloud De-Noising in Adverse Weather}, 
  year     = {2020}, 
  keywords = {Semantic Scene Understanding;Visual Learning;Computer Vision for Transportation}, 
  doi      = {10.1109/LRA.2020.2972865}, 
  ISSN     = {2377-3774}
}

Acknowledgements

This work has received funding from the European Union under the H2020 ECSEL Programme as part of the DENSE project, contract number 692449. We thank Velodyne Lidar, Inc. for permission to publish this dataset.

Feedback/Questions/Error reporting

Feedback? Questions? Any problems or errors? Please do not hesitate to contact us!
