One Thing One Click

One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation (CVPR 2021)

Code for the paper One Thing One Click: A Self-Training Approach for Weakly Supervised 3D Semantic Segmentation, CVPR 2021.

This code is based on PointGroup (https://github.com/llijiang/PointGroup).

Authors: Zhengzhe Liu, Xiaojuan Qi, Chi-Wing Fu

Installation

Requirements

  • Python 3.7.0
  • PyTorch 1.3.0
  • CUDA 10.1

Virtual Environment

conda create -n pointgroup python==3.7
source activate pointgroup

Install PointGroup

(1) Clone the PointGroup repository.

git clone https://github.com/liuzhengzhe/One-Thing-One-Click --recursive 
cd One-Thing-One-Click

(2) Install the dependent libraries.

pip install -r requirements.txt
conda install -c bioconda google-sparsehash 

(3) For sparse convolution, we use the spconv implementation. The spconv repository is downloaded recursively in step (1). We use spconv version 1.0.

Note: The author of PointGroup further modified spconv/spconv/functional.py to make grad_output contiguous. Make sure you use the modified spconv included in this repository.

  • To compile spconv, first install the dependent libraries.
conda install libboost
conda install -c daleydeng gcc-5 # need gcc-5.4 for sparseconv

Add the $INCLUDE_PATH$ that contains Boost to lib/spconv/CMakeLists.txt (not necessary if it can be found automatically).

include_directories($INCLUDE_PATH$)
  • Compile the spconv library.
cd lib/spconv
python setup.py bdist_wheel
  • Run cd dist and use pip to install the generated .whl file (see the example below).
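
For example, assuming the build produced a single spconv wheel under dist/ (the exact file name depends on your Python version and platform):

cd dist
pip install spconv-*.whl   # adjust the wheel name to the one actually generated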

(4) Compile the pointgroup_ops library.

cd lib/pointgroup_ops
python setup.py develop

If any header files cannot be found, run the following commands.

python setup.py build_ext --include-dirs=$INCLUDE_PATH$
python setup.py develop

$INCLUDE_PATH$ is the path to the folder containing the header files that could not be found.
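
For example, if the missing headers belong to Boost installed into the active conda environment (via conda install libboost above), they typically live under the environment's include directory; a hedged example:

python setup.py build_ext --include-dirs=$CONDA_PREFIX/include
python setup.py develop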

Data Preparation

  • Download the ScanNet v2 dataset.

  • Put the data in the corresponding folders.

  • Put the file scannetv2-labels.combined.tsv in the data/ folder.

  • Change the data path in prepare_data_otoc.py (line 20).

cd data/
python prepare_data_otoc.py 
  • Split the generated files into the data/train_weakly and data/val_weakly folders according to the ScanNet v2 train/val split (a sketch is given below).
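
A minimal sketch of the split step, run from the data/ directory, assuming the official ScanNet v2 split lists scannetv2_train.txt and scannetv2_val.txt (from the ScanNet benchmark) have been placed there and that each generated file name starts with its scene id:

mkdir -p train_weakly val_weakly
# move each generated file into the folder of its split, keyed by scene id
while read scene; do mv "${scene}"* train_weakly/; done < scannetv2_train.txt
while read scene; do mv "${scene}"* val_weakly/; done < scannetv2_val.txt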

Pretrained Model

We provide a model pretrained on the ScanNet v2 dataset. Download it here. Its performance on the ScanNet v2 validation set is 71.94 mIoU.

Inference and Evaluation

(1) 3D U-Net Evaluation

Set data_root in config/pointgroup_run1_scannet.yaml.

cd 3D-U-Net
python test.py --config config/pointgroup_run1_scannet.yaml --pretrain pointgroup_run1_scannet-000001250.pth

Its performance on the ScanNet v2 validation set is 68.96 mIoU.

(2) Relation Net Evaluation

cd relation
python test.py --config config/pointgroup_run1_scannet.yaml --pretrain pointgroup_run1_scannet-000002891_weight.pth

(3) Overall Evaluation

cd merge
python test.py --config config/pointgroup_run1_scannet.yaml

Self Training

(1) Train 3D U-Net

Set data_root and dataset in config/pointgroup_run1_scannet.yaml.

cd 3D-U-Net
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml 

(2) Generate features and predictions of 3D U-Net

CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml --pretrain $PATH_TO_THE_MODEL$.pth

(3) Train Relation Net

Set data_root and dataset in config/pointgroup_run1_scannet.yaml.

cd relation
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml 

(4) Generate features and predictions of Relation Net

CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml --pretrain $PATH_TO_THE_MODEL$_weight.pth

(5) Merge the Results via Graph Propagation

cd merge
CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml

(6) Repeat steps (1) to (5) for 3 to 5 rounds of self-training (a sketch of one round is given below).
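
Below is a sketch of one self-training round that simply chains the commands from steps (1) to (5) into a single script; $PATH_TO_THE_MODEL$ is the same placeholder as above and must point to the checkpoints produced in the current round:

cd 3D-U-Net
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml
CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml --pretrain $PATH_TO_THE_MODEL$.pth
cd ../relation
CUDA_VISIBLE_DEVICES=0 python train.py --config config/pointgroup_run1_scannet.yaml
CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml --pretrain $PATH_TO_THE_MODEL$_weight.pth
cd ../merge
CUDA_VISIBLE_DEVICES=0 python test_train.py --config config/pointgroup_run1_scannet.yaml
cd ..
# repeat this round 3 to 5 times, each time initializing from the previous round's outputs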

Acknowledgement

This repo is built upon several repos, e.g., PointGroup, SparseConvNet, spconv, and ScanNet.

Contact

If you have any questions or suggestions about this repo, please feel free to contact me ([email protected]).
