Code for our paper Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation

Overview

CorDA

Code for our paper Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation

Prerequisite

Please create and activate the following conda environment:

# It may take several minutes for conda to solve the environment
conda env create -f environment.yml
conda activate corda 

The code was tested on a V100 GPU with 16 GB of memory.

Train a CorDA model

# Train for the SYNTHIA2Cityscapes task
bash run_synthia_stereo.sh
# Train for the GTA2Cityscapes task
bash run_gta.sh

Test the trained model

bash shells/eval_syn2city.sh
bash shells/eval_gta2city.sh

Pre-trained models are provided (Google Drive). Please put them in ./checkpoint.

  • The provided SYNTHIA2Cityscapes model achieves 56.3 mIoU (16 classes) at the end of the training.
  • The provided GTA2Cityscapes model achieves 57.7 mIoU (19 classes) at the end of the training.

Reported Results on SYNTHIA2Cityscapes

Method   mIoU* (13 classes)   mIoU (16 classes)
CBST     48.9                 42.6
FDA      52.5                 -
DADA     49.8                 42.6
DACS     54.8                 48.3
CorDA    62.8                 55.0

Citation

Please cite our work if you find it useful.

@article{wang2021domain,
  title={Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
  author={Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Fink, Olga and Van Gool, Luc},
  journal={arXiv preprint arXiv:2104.13613},
  year={2021}
}

Acknowledgement

  • DACS is used as our codebase and as our DA baseline (official code)
  • SFSU is the source of the stereo Cityscapes depth estimation (official code)

Data links

For questions regarding the code, please contact [email protected].

Comments
  • Training on a custom dataset without ground truth label

    From what I understand after reading your paper, you do not need ground truth label data on the target domain to train the pseudo labels. However, when I look at cityscapes_loader, it seems I need to supply the ground truth seg maps as well.

    I am trying to train the network on a custom dataset (which has only depth maps, with ground-truth segmentation maps only on the source domain), but it looks like I cannot get away without providing them. Do you have any thoughts on this?
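
    For reference, a minimal sketch of one common workaround: return an all-ignore label map for target images that have no annotation, so the loader interface stays unchanged. The ignore index of 250 is an assumption; match whatever the codebase actually uses.

    import numpy as np

    IGNORE_INDEX = 250  # assumed ignore label; adjust to match the loader/loss settings

    def dummy_label(height, width):
        """All-ignore segmentation map for target images without ground truth."""
        return np.full((height, width), IGNORE_INDEX, dtype=np.uint8)

    # Inside a custom dataset's __getitem__, when no annotation file exists,
    # return dummy_label(H, W) in place of a real ground-truth map.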

    opened by chophilip21 6
  • Confusion about the 'depth' of Cityscapes

    Hello, nice work, but I have a couple of questions.

    In data/cityscapes_loader.py, lines 181-183:

    depth = cv2.imread(depth_path, flags=cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256. + 1.
    if depth.shape != lbl.shape:
        depth = cv2.resize(depth, lbl.shape[::-1], interpolation=cv2.INTER_NEAREST)
    # Monocular depth: in disparity form 0 - 65535

    (1) Why is the depth calculated as x / 256 + 1?
    (2) Is this depth or disparity? The official Cityscapes documentation says disparity = (x - 1) / 256.

    Thank you!
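
    For reference, a minimal sketch comparing the two conversions discussed here: the official Cityscapes encoding (disparity = (x - 1) / 256 for valid pixels, with 0 meaning invalid) versus the x / 256 + 1 expression used in the loader. The file path is a placeholder.

    import cv2
    import numpy as np

    # Placeholder path to a 16-bit Cityscapes disparity PNG (0 marks invalid pixels).
    raw = cv2.imread("aachen_000000_000019_disparity.png",
                     flags=cv2.IMREAD_ANYDEPTH).astype(np.float32)

    # Official Cityscapes encoding: valid pixels store disparity as d * 256 + 1.
    disparity = np.where(raw > 0, (raw - 1.0) / 256.0, 0.0)

    # Expression used in the loader (note the +1 offset and no masking of invalid pixels).
    loader_value = raw / 256.0 + 1.0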

    opened by ganyz 6
  • gta2city

    When I revisited the performance of your GTA2City model, I found that the mIoU could only reach about 54.8 after 250,000 iterations. I didn't change anything except using CUDA 10.2. Could you please provide the training log of your GTA2City run? Thanks a lot!

    opened by xiaoachen98 6
  • Question about the pretrained parameters of backbone

    Thanks for sharing the code; it brings an amazing improvement to this field.

    I notice that you use a backbone pretrained on MS COCO, the same as DACS. Have you tried a backbone pretrained on ImageNet? If yes, could you please provide the corresponding results?

    opened by super233 4
  • About intrinsics used in GTA depth estimation

    Thanks a lot for your fantastic work. When I followed the depth estimation procedure mentioned in issue #7, I went to https://playing-for-benchmarks.org. However, its camera calibration doesn't include the intrinsic matrix directly, which is needed for Monodepth2 depth estimation. Would you kindly share the GTA intrinsics you used for depth estimation? Or is there a way to convert GTA's projection matrix to an intrinsic matrix?
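
    For reference, a minimal sketch of one common conversion, assuming an OpenGL-style perspective projection matrix and a principal point at the image center; whether this matches the GTA export is an assumption, not something the authors confirmed.

    import numpy as np

    def intrinsics_from_projection(P, width, height):
        """Recover pinhole intrinsics from an OpenGL-style projection matrix.

        Assumes P[0, 0] = 2 * fx / width and P[1, 1] = 2 * fy / height,
        with the principal point at the image center; adjust if the export differs.
        """
        fx = P[0, 0] * width / 2.0
        fy = P[1, 1] * height / 2.0
        cx, cy = width / 2.0, height / 2.0
        return np.array([[fx, 0.0, cx],
                         [0.0, fy, cy],
                         [0.0, 0.0, 1.0]], dtype=np.float32)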

    opened by Ichinose0code 2
  • Why does the class Train have 0 mIoU? What could cause this?

    I downloaded your pretrained model and ran the demo, but I find that the train IoU is 0.0:

    (yy_corda) [email protected]:/media/ailab/data/yy/corda$ bash shells/eval_gta2city.sh
    ./checkpoint/gta
    Found 500 val images
    Evaluating, found 500 batches.
    100 processed
    200 processed
    300 processed
    400 processed
    500 processed
    class  0 road         IU 94.81
    class  1 sidewalk     IU 62.18
    class  2 building     IU 88.03
    class  3 wall         IU 33.09
    class  4 fence        IU 43.51
    class  5 pole         IU 39.93
    class  6 traffic_light IU 49.46
    class  7 traffic_sign IU 54.68
    class  8 vegetation   IU 88.01
    class  9 terrain      IU 47.67
    class 10 sky          IU 89.22
    class 11 person       IU 68.22
    class 12 rider        IU 39.21
    class 13 car          IU 90.25
    class 14 truck        IU 51.43
    class 15 bus          IU 58.37
    class 16 train        IU 0.00
    class 17 motorcycle   IU 40.38
    class 18 bicycle      IU 57.42
    meanIOU: 0.5767768805758403
    

    I trained my own model and tested it with eval_syn2city.sh; there, 3 classes have 0.0 IoU because they are missing in the source domain. But when I download the pretrained model and run eval_gta2city.sh, the train class is still missing. So I want to know why. Could it be that the train class doesn't appear in the Cityscapes data, so its IoU is 0?
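
    For reference, a minimal sketch of how per-class IoU is usually computed from a confusion matrix; a class that never appears in either the predictions or the ground truth has an empty union, which many evaluation scripts report as 0.

    import numpy as np

    def per_class_iou(confusion):
        """IoU per class from a (num_classes x num_classes) confusion matrix (rows: GT, cols: prediction)."""
        tp = np.diag(confusion).astype(np.float64)
        fp = confusion.sum(axis=0) - tp
        fn = confusion.sum(axis=1) - tp
        union = tp + fp + fn
        # Classes absent from both prediction and ground truth have union == 0.
        return np.where(union > 0, tp / np.maximum(union, 1), 0.0)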

    opened by yuheyuan 2
  • How to obtain your depth datasets?

    Hi, thanks for your great work!

    It would be great if you could elaborate on how you obtained the monocular depth estimation.

    I understand that you've uploaded the dataset, but it would be really helpful to know exactly how you did it.

    From your paper, in the ablation study part: "We would like to highlight that for both stereo and monocular depth estimations, only stereo pairs or image sequences from the same dataset are used to train and generate the pseudo depth estimation model. As no data from external datasets is used, and stereo pairs and image sequences are relatively easy to obtain, our proposal of using self-supervised depth have the potential to be effectively realized in real-world applications."

    So I imagine you get your monocular depth pseudo ground truth by:

    1. Download target-domain videos (here Cityscapes; by the way, where do you get the Cityscapes videos?)
    2. Train a Monodepth2 model on those videos (for how long?)
    3. Use the model to get the pseudo ground truth, then repeat for the source domain (GTA 5 or SYNTHIA)

    Am I getting it right? And are there any other important points you want to highlight when computing such depth labels?
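
    As a rough illustration of step 3, a minimal sketch of running a trained self-supervised depth network over target images and saving 16-bit pseudo-depth PNGs; the placeholder model, paths, and output scaling are assumptions, not the authors' exact pipeline.

    import glob
    import cv2
    import numpy as np
    import torch

    # Toy stand-in for the trained depth model from step 2 (e.g. a Monodepth2-style
    # encoder/decoder); load the real weights here instead.
    depth_model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
    depth_model.eval()

    with torch.no_grad():
        for path in glob.glob("cityscapes/leftImg8bit/train/*/*.png"):
            img = cv2.imread(path).astype(np.float32) / 255.0
            x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x H x W
            disp = depth_model(x).squeeze().cpu().numpy()            # relative disparity

            # Save as a 16-bit PNG scaled to the full range (an assumed convention).
            out = (disp - disp.min()) / (disp.max() - disp.min() + 1e-8) * 65535.0
            cv2.imwrite(path.replace("leftImg8bit", "pseudo_depth"), out.astype(np.uint16))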

    Regards, Tu

    opened by tudragon154203 2
  • How to continue training?

    When I use a script like

    CUDA_VISIBLE_DEVICES=0 python3 -u trainUDA_gta.py --config ./configs/configUDA_gta2city.json --name UDA-gta --resume /saved/DeepLabv2-depth-gtamono-cityscapestereo/05-03_02-13-UDA-gta/checkpoint-iter95000.pth | tee ./gta-corda.log

    It would run again but the new checkpoint would be saved.

    opened by ygjwd12345 2
  • Warning: optimizer contains a parameter group with duplicate parameters

    I followed your code and trained a model, but the results do not meet expectations.

    I evaluated the model you shared:

    bash shells/eval_syn2city.sh
    

    Your shared model, syn2city, 19 classes: meanIoU 0.4771. The model I trained, syn2city, 19 classes: meanIoU only 0.467.

    During training I see the warning below, so I want to know whether it could cause the drop in results.

    /home/ailab/anaconda3/envs/yy_CORDA/lib/python3.7/site-packages/torch/optim/sgd.py:68: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
      super(SGD, self).__init__(params, defaults)
    D_init tensor(134.8489, device='cuda:0', grad_fn=<DivBackward0>) D tensor(134.5171, device='cuda:0', grad_fn=<DivBackward0>)
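
    For reference, a minimal sketch of how this PyTorch warning typically arises (the same tensor listed twice within one parameter group, so its update is applied twice per step) and one way to deduplicate; the model below is a toy stand-in, not the CorDA network.

    import torch
    from torch import nn, optim

    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Conv2d(8, 8, 3))

    # The first layer's parameters appear twice in the same group -> UserWarning.
    params = list(model.parameters()) + list(model[0].parameters())
    optimizer = optim.SGD(params, lr=1e-3, momentum=0.9)

    # One fix: keep only the first occurrence of each parameter (by identity).
    seen, unique = set(), []
    for p in params:
        if id(p) not in seen:
            seen.add(id(p))
            unique.append(p)
    optimizer = optim.SGD(unique, lr=1e-3, momentum=0.9)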
    
    opened by yuheyuan 1
  • deeplabv2_synthia.py may have an extra indentation issue

    In the forward code, return out seems to have extra indentation:

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    

    This is the code in your repository:

    class Classifier_Module(nn.Module):
    
        def __init__(self, dilation_series, padding_series, num_classes):
            super(Classifier_Module, self).__init__()
            self.conv2d_list = nn.ModuleList()
            for dilation, padding in zip(dilation_series, padding_series):
                self.conv2d_list.append(nn.Conv2d(256, num_classes, kernel_size=3, stride=1, padding=padding, dilation=dilation, bias = True))
    
            for m in self.conv2d_list:
                m.weight.data.normal_(0, 0.01)
    
        def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
                return out
    
    

    I think the forward should look like the version below, because the module list contains four branches; if return out is indented inside the loop, the forward pass returns after adding only the second branch and never uses the remaining ones.

       def forward(self, x):
            out = self.conv2d_list[0](x)
            for i in range(len(self.conv2d_list)-1):
                out += self.conv2d_list[i+1](x)
            return out
    
   self._make_pred_layer(Classifier_Module, [6, 12, 18, 24], [6, 12, 18, 24], NUM_OUTPUT[task])
    
       def _make_pred_layer(self,block, dilation_series, padding_series,num_classes):
            return block(dilation_series,padding_series,num_classes)
    
    opened by yuheyuan 1
  • Checkpoint links fail

    I can't download the checkpoint files from your links. When I click through to Google Drive, the file size is shown as 2 GB, but the downloaded file is only 0 B.

    opened by xiaoachen98 0
Owner
Qin Wang
PhD student @ ETH Zürich