Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization" (ICCV 2021)

Overview

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Updates

Paper

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
International Conference on Computer Vision, 2021.

If you find this code useful for your research, please cite our paper:

@inproceedings{guan2021domain,
  title={Domain adaptive video segmentation via temporal consistency regularization},
  author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8053--8064},
  year={2021}
}

Abstract

Video semantic segmentation is an essential task for the analysis and understanding of videos. Recent efforts largely focus on supervised video segmentation by learning from fully annotated data, but the learnt models often experience clear performance drop while applied to videos of a different domain. This paper presents DA-VSN, a domain adaptive video segmentation network that addresses domain gaps in videos by temporal consistency regularization (TCR) for consecutive frames of target-domain videos. DA-VSN consists of two novel and complementary designs. The first is cross-domain TCR that guides the prediction of target frames to have similar temporal consistency as that of source frames (learnt from annotated source data) via adversarial learning. The second is intra-domain TCR that guides unconfident predictions of target frames to have similar temporal consistency as confident predictions of target frames. Extensive experiments demonstrate the superiority of our proposed domain adaptive video segmentation network which outperforms multiple baselines consistently by large margins.
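
To make the two designs concrete, below is a minimal PyTorch-style sketch of the core temporal consistency measurement: the previous frame's prediction is warped to the current frame with optical flow, and unconfident current-frame pixels are pulled toward that warped reference (intra-domain TCR); cross-domain TCR feeds such consistency signals to a discriminator for adversarial alignment. This is an illustration of the idea under assumed names (warp_bilinear, intra_domain_tcr), not the released code.

# Minimal sketch of temporal consistency regularization (TCR); illustrative only.
import torch
import torch.nn.functional as F

def warp_bilinear(prob, flow):
    # prob: (N, C, H, W) softmax prediction of the previous frame
    # flow: (N, 2, H, W) optical flow in pixels, channel 0 = x, channel 1 = y
    n, _, h, w = prob.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w))  # row-major ("ij") grid
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(prob.device)
    coords = grid + flow                      # source location for each current-frame pixel
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0   # normalize x to [-1, 1] for grid_sample
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0   # normalize y to [-1, 1]
    # note: the align_corners keyword requires torch >= 1.3
    return F.grid_sample(prob, torch.stack((gx, gy), dim=-1), align_corners=True)

def intra_domain_tcr(prob_prev, prob_cur, flow, conf_thresh=0.9):
    # Pull unconfident current-frame pixels toward the flow-warped reference.
    warped = warp_bilinear(prob_prev, flow).detach()
    conf, _ = prob_cur.max(dim=1, keepdim=True)
    unconfident = (conf < conf_thresh).float()
    return (unconfident * (prob_cur - warped).abs()).mean()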

Installation

  1. Conda environment:
conda create -n DA-VSN python=3.6
conda activate DA-VSN
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
  2. Clone ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
  3. Clone this repo:
git clone https://github.com/Dayan-Guan/DA-VSN.git
pip install -e ./DA-VSN

Preparation

  1. Dataset:
DA-VSN/data/Cityscapes/                       % Cityscapes dataset root
DA-VSN/data/Cityscapes/leftImg8bit_sequence   % leftImg8bit_sequence_trainvaltest
DA-VSN/data/Cityscapes/gtFine                 % gtFine_trainvaltest
DA-VSN/data/Viper/                            % VIPER dataset root
DA-VSN/data/Viper/train/img                   % Modality: Images; Frames: *[0-9]; Sequences: 00-77; Format: jpg
DA-VSN/data/Viper/train/cls                   % Modality: Semantic class labels; Frames: *0; Sequences: 00-77; Format: png
DA-VSN/data/SynthiaSeq/                      % SYNTHIA-Seq dataset root
DA-VSN/data/SynthiaSeq/SEQS-04-DAWN          % SYNTHIA-SEQS-04-DAWN
  2. Pre-trained models: Download the pre-trained models and put them in DA-VSN/pretrained_models

Optical Flow Estimation

  • For quick preparation: Download the optical flow estimated from the Cityscapes-Seq validation set here and unzip it in DA-VSN/data:
DA-VSN/data/Cityscapes_val_optical_flow_scale512/  % unzip Cityscapes_val_optical_flow_scale512.zip
  1. Clone flownet2-pytorch:
git clone https://github.com/NVIDIA/flownet2-pytorch.git
  2. Download the pre-trained FlowNet2 and put it in flownet2-pytorch/pretrained_models
  3. Use flownet2-pytorch to estimate optical flow (a sketch is given below)
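
As a hedged sketch only: flownet2-pytorch ships a main.py with an inference mode, and an invocation along the following lines dumps flow files for a folder of frames. The dataset class, paths, and checkpoint name below are assumptions for illustration, not the exact settings used to produce the paper's flow files.

cd flownet2-pytorch
python main.py --inference --model FlowNet2 --save_flow \
    --inference_dataset ImagesFromFolder \
    --inference_dataset_root ../DA-VSN/data/Cityscapes/leftImg8bit_sequence/val \
    --resume pretrained_models/FlowNet2_checkpoint.pth.tar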

Evaluation on Pretrained Models

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python test.py --cfg configs/davsn_viper2city_pretrained.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python test.py --cfg configs/davsn_syn2city_pretrained.yml

Training and Testing

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python train.py --cfg configs/davsn_viper2city.yml
python test.py --cfg configs/davsn_viper2city.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python train.py --cfg configs/davsn_syn2city.yml
python test.py --cfg configs/davsn_syn2city.yml

Acknowledgements

This codebase borrows heavily from ADVENT and flownet2-pytorch.

Contact

If you have any questions, please contact: [email protected]

Comments
  • Optical flow is not used for propagating

    Hi, author. I have two questions. The first is that you don't use the flow to propagate the previous frame to the current frame; you only use it as a constraint so that pixels appearing in both cf (current frame) and kf (key frame) are retained, which seems unreasonable. I refined the code using Resample2D to warp kf to cf, but the result only improved a little.

    The second question: I trained DA-VSN 3 times on a 1080Ti and a 2080Ti following the settings you gave, but I only get 46 mIoU, which is 2 points lower than yours.

    opened by EDENpraseHAZARD 5
  • Question on Synthia-seq dataset

    Dear authors,

    Thank you for your great work. I have several questions about the SYNTHIA-Seq → Cityscapes-Seq adaptation. The first is about the scale of the training data: it seems that, compared with the VIPER dataset, SYNTHIA-Seq contains only one labeled video with 850 frames in total. Is that true? The second is that 11 classes are reported in Table 4, but the dataloader of SYNTHIA-Seq uses 12 classes, so I'm not sure whether the fence class is considered during adaptation: https://github.com/Dayan-Guan/DA-VSN/blob/d110ff70dacec4156a3787eb49e7f2448dfb91a5/davsn/dataset/SynthiaSeq.py#L11

    Thanks in advance for your help!

    opened by xyIsHere 3
  • Details of SYNTHIA-Seq dataset

    Hi author, I have downloaded SYNTHIA-Seq, but I found there are 'Stereo_Left' and 'Stereo_Right' folders, each containing 'Omni_B', 'Omni_F', 'Omni_L' and 'Omni_R'. I wonder which one is used for training.

    opened by EDENpraseHAZARD 2
  • Could you please provide 'estimated_optical_flow' for training DA-VSN

    Hi @Dayan-Guan, thank you for open-sourcing your work!

    I am trying to follow this work. For training DA-VSN from scratch, the optical flows (for the 3 datasets used in your paper) estimated by FlowNet2 are needed, yet the instructions in your README only cover the evaluation part. I also see from recent issues that you have provided code and more instructions for the training part, but the code seems incomplete, so I cannot generate the optical flows with it.

    Could you please provide your generated optical flows for all 3 datasets used in your paper? It would save us time. Alternatively, could you please have another look at the provided 'Code_for_optical_flow_estimation' so that it is runnable for generating optical flows on our own?

    Thanks in advance!

    Regards

    opened by ldkong1205 1
  • In train_video_UDA.py, line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp): the image flips, but the optical flow does not flip

    Hello! I really enjoy reading your work! At the same time, I encountered a problem when running train_video_UDA.py.

    In line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp), the variable trg_prob is the prediction for trg_img_b_wk, and trg_img_b_wk is obtained from trg_img_b with a random horizontal flip; but trg_flow_warp does not seem to be flipped. Consider the case where trg_img_b_wk is flipped while trg_flow_warp is not: then trg_prob_warp and trg_img_d_st do not seem consistent, because the image is flipped but the optical flow is not, even though trg_pl is flipped in lines 256-258. (A sketch of flipping the flow consistently with the image is given at the end of this Comments section.)

    opened by zhe-juanz 0
  • Some questions about data loading

    Hi, this is a very enlightening work! @xing0047 @Dayan-Guan I want to ask a question.

    When I use ./TPS/tps/scripts/train.py to read SynthiaSeq or ViperSeq data, I debugged the code and found the following phenomena:

    I tried to print some variables of __getitem__().

    When shuffle of source_loader = data.DataLoader() is set to False and batch_size=cfg.TRAIN.BATCH_SIZE_SOURCE is set to 1:

    1. Although batch_size=1, 4 images and their corresponding previous frames are loaded at one time, instead of 1 image and its previous frame.

    2. The 4 loaded images are out of order, e.g. 2-1-3-4 rather than 1-2-3-4, which seems to violate the shuffle setting.

    Could you please kindly explain this? Thank you very much!

    The print code was attached as a screenshot. The print results are as follows (the order differs on each run):

    ---index--- 1
    ---index--- 0
    ---index--- 2
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    ---index--- 3
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000004.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000004.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000000.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000000.png

    opened by zhe-juanz 0
  • Regarding Synthia-Seq Dataset

    I really enjoyed reading your work. I have a question regarding the SYNTHIA-Seq dataset: in the paper you mention that you used 8000 synthesized video frames, but on GitHub SYNTHIA-Seq DAWN contains only 850 images. Could you please clarify this ambiguity? Thank you.

    opened by Ihsan149 0
  • Optical flow for training

    Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train or Estimated_optical_flow_Cityscapes-Seq_train; I couldn't find details about the optical flow in the README or the paper.

    opened by EDENpraseHAZARD 11
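  • Regarding the flip question above: a minimal sketch, assuming flow tensors of shape (N, 2, H, W) with channel 0 holding the horizontal displacement, of how a horizontal flip can be applied to an image and its optical flow consistently. The function name is hypothetical and this is not the released DA-VSN code.

    # Sketch: flip image and optical flow together (illustrative only).
    import torch

    def hflip_image_and_flow(img, flow):
        # img: (N, 3, H, W); flow: (N, 2, H, W), channel 0 = horizontal displacement
        img_flipped = torch.flip(img, dims=[3])    # mirror the image left-right
        flow_flipped = torch.flip(flow, dims=[3])  # mirror the flow field spatially
        flow_flipped[:, 0] = -flow_flipped[:, 0]   # negate the x-displacement
        return img_flipped, flow_flipped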