[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion

Overview

Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS)

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang

CVPR 2021

[arXiv] [Paper PDF] [Project Page] [Demo] [Papers with Code]

[Demo GIFs: demo1, demo2, demo3]

Credit (left to right): DAVIS 2017, Academy of Historical Fencing, Modern History TV

We manage the project using three different repositories (one per module named in the paper title). This is the main repo; see also Mask-Propagation and Scribble-to-Mask.

Overall structure and capabilities

| | MiVOS | Mask-Propagation | Scribble-to-Mask |
| --- | :---: | :---: | :---: |
| DAVIS/YouTube semi-supervised evaluation | | ✔️ | |
| DAVIS interactive evaluation | ✔️ | | |
| User interaction GUI tool | ✔️ | | |
| Dense Correspondences | | ✔️ | |
| Train propagation module | | ✔️ | |
| Train S2M (interaction) module | | | ✔️ |
| Train fusion module | ✔️ | | |
| Generate more synthetic data | ✔️ | | |

Framework

[Framework overview figure]

Requirements

We used these packages/versions in the development of this project. Higher versions of the same packages will likely also work. This is not an exhaustive list -- other common Python packages (e.g., Pillow) are expected and not listed.

Refer to the official PyTorch guide for installing PyTorch/torchvision. The rest can be installed by:

pip install PyQt5 davisinteractive progressbar2 opencv-python networkx gitpython gdown Cython
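
After installing, a quick way to confirm the environment is to import everything once. This is a small sanity-check snippet of our own, not part of the repo; note that several import names differ from the pip package names:

```python
# Minimal import sanity check (our own snippet, not part of the repo).
import torch            # installed separately, per the official PyTorch guide
import torchvision
import cv2              # pip package: opencv-python
import PyQt5
import davisinteractive
import progressbar      # pip package: progressbar2
import networkx
import git              # pip package: gitpython
import gdown
import Cython

print('PyTorch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
```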

Quick start

  1. Run python download_model.py to get all the required models.
  2. Run python interactive_gui.py --video <video> or python interactive_gui.py --images <path to a folder of images>. A video has been prepared for you at examples/example.mp4.
  3. If you need to label more than one object, additionally specify --num_objects <number_of_objects>.
  4. There are instructions in the GUI. You can also watch the demo videos for some ideas; a combined command example is shown below.
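
Putting the steps together, a typical first session might look like this (the image-folder path on the last line is illustrative):

```
python download_model.py
python interactive_gui.py --video examples/example.mp4
python interactive_gui.py --images /path/to/frames --num_objects 3
```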

Main Results

DAVIS/YouTube semi-supervised results: see the Mask-Propagation repo.

DAVIS Interactive Track

All results are generated using the unmodified official DAVIS interactive bot, without saving masks (--save_mask not specified), on an RTX 2080Ti. We follow the official protocol.

Precomputed results, with the JSON summary: [Google Drive] [OneDrive]

Results can be reproduced with eval_interactive_davis.py.

| Model | AUC-J&F | J&F @ 60s |
| --- | :---: | :---: |
| Baseline | 86.0 | 86.6 |
| (+) Top-k | 87.2 | 87.8 |
| (+) BL30K pretraining | 87.4 | 88.0 |
| (+) Learnable fusion | 87.6 | 88.2 |
| (+) Difference-aware fusion (full model) | 87.9 | 88.5 |

Pretrained models

python download_model.py should get you all the models that you need. (pip install gdown required.)

[OneDrive Mirror]

Training

Data preparation

Datasets should be arranged in the following layout. You can use download_datasets.py (the same one as in Mask-Propagation) to get the DAVIS dataset, and manually download and extract fusion_data ([OneDrive]) and BL30K.

├── BL30K
├── DAVIS
│   └── 2017
│       ├── test-dev
│       │   ├── Annotations
│       │   └── ...
│       └── trainval
│           ├── Annotations
│           └── ...
├── fusion_data
└── MiVOS
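
Before training, it can help to verify this layout. Below is a minimal check of our own (a hypothetical helper, not part of the repo), run from inside the MiVOS folder:

```python
# Verify the dataset layout described above (run from inside MiVOS/).
from pathlib import Path

root = Path('..')  # the parent folder holding BL30K, DAVIS, fusion_data, MiVOS
for sub in ['BL30K',
            'DAVIS/2017/test-dev/Annotations',
            'DAVIS/2017/trainval/Annotations',
            'fusion_data']:
    print(f"{sub}: {'ok' if (root / sub).is_dir() else 'MISSING'}")
```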

BL30K

BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a format similar to DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. There are 3-5 objects per video, and each object follows a random smooth trajectory -- we tried to optimize the trajectories greedily to minimize object intersection (not guaranteed), and occlusions are still possible (they happen a lot in practice). See generation/blender/generate_yaml.py for details.

We found that using roughly half of the data is likely sufficient to reach full performance (we nevertheless used all of it), but using less than one-sixth (a single 5K segment) is insufficient.
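
Since the layout mirrors DAVIS/YouTubeVOS, enumerating the data is straightforward. Here is a sketch assuming YouTubeVOS-style JPEGImages/Annotations subfolders -- the folder names are our assumption, so verify them against the extracted dataset:

```python
# Sketch: walk BL30K assuming a YouTubeVOS-style layout
# (JPEGImages/<video>/*.jpg, Annotations/<video>/*.png) -- the folder
# names are assumptions; check the extracted data.
from pathlib import Path

root = Path('../BL30K')
for video_dir in sorted((root / 'JPEGImages').iterdir()):
    frames = sorted(video_dir.glob('*.jpg'))
    print(video_dir.name, len(frames))  # expect 160 frames per video
```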

Download

You can either use the automatic script download_bl30k.py or download it manually below. Note that each segment is about 115GB in size -- 700GB in total. You are going to need ~1TB of free disk space to run the script (including extraction buffer).

Google Drive is much faster in my experience. Your mileage might vary.

Manual download: [Google Drive] [OneDrive]
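
Given the ~1TB requirement noted above, a trivial free-space check (our own snippet) before kicking off download_bl30k.py:

```python
# Check free disk space before downloading BL30K (~1TB recommended).
import shutil

free_gib = shutil.disk_usage('.').free / 2**30
print(f'Free space: {free_gib:.0f} GiB')
```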

Generation

  1. Download ShapeNet.
  2. Install Blender. (We used 2.82)
  3. Download a set of background and texture images. We used this repo (we specified "non-commercial reuse" in the script); the lists of keywords are provided in generation/blender/*.json.
  4. Generate a list of configuration files (generation/blender/generate_yaml.py).
  5. Run rendering on the configurations. See here. (This is not documented in detail; ask if you have questions.) A sketch of a headless launch is shown below.
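
For step 5, Blender renders are typically launched headless with its --background/--python flags. A minimal sketch, where render.py and the config path are hypothetical stand-ins for the actual entry point under generation/blender/:

```
blender --background --python render.py -- --config configs/00000.yaml
```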

Fusion data

We run the propagation module on some data to obtain real outputs for training the fusion module. See the script generate_fusion.py.

Alternatively, you can download the pre-generated fusion data (the fusion_data link in Data preparation above).

Training commands

These commands are to train the fusion module only.

CUDA_VISIBLE_DEVICES=[a,b] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=2 train.py --id [defg] --stage [h]

We implemented training with Distributed Data Parallel (DDP) on two 11GB GPUs. Replace a, b with the GPU ids, cccc with an unused port number, defg with a unique experiment identifier, and h with the training stage (0/1).

The model is trained progressively in stages (0: BL30K; 1: DAVIS). After each stage finishes, we start the next stage by loading the trained weights. A pretrained propagation model is required to train the fusion module.

One concrete example is:

Pre-training on the BL30K dataset: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 0 --id retrain_s0

Main training: CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 1 --id retrain_s012 --load_network [path_to_trained_s0.pth]

Credit

f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation

ivs-demo: https://github.com/seoungwugoh/ivs-demo

deeplab: https://github.com/VainF/DeepLabV3Plus-Pytorch

STM: https://github.com/seoungwugoh/STM

BlenderProc: https://github.com/DLR-RM/BlenderProc

Citation

Please cite our paper if you find this repo useful!

@inproceedings{MiVOS_2021,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}

Contact: [email protected]

Comments
  • Some problems when training Fusion

    Hello, I encountered some problems when retraining the fusion model. Some key parameter guidelines for training fusion are not given in the repository. Can you provide them? Specifically: (1) generate_fusion.py: the "separation" parameter is not documented.

    Can you provide the relevant parameter descriptions for fusion training, and the commands to run, so that I can reproduce the results of your paper?

    Also, when I try to train (python train.py), I hit an error in fusion_dataset.py: (1) is there a mistake where you assign a value to self.vid_to_instance? It raises an error at self.videos = [v for v in self.videos if v in self.vid_to_instance] (line 60 in fusion_dataset.py).

    opened by nazimii 12
  • Process killed

    I tried MiVOS + STCN on a 1.5-minute 4K video that was downsampled to 480p, and the program crashed.

    What are the steps to reformat/resample a 4K video to make it work with this tool?

    Also, can this tool run on multiple GPUs?

    opened by zdhernandez 11
  • Fine-tune guidance

    Hi, I really loved the work. I'm trying to fine-tune the downloaded models (from download_model.py) on another domain. I was wondering if you could advise where to put the data and which command to run for training.

    Thank you

    opened by be-redAsmara 8
  • RuntimeError: " not implemented for 'BFloat16'(example.mp4)">

    RuntimeError: "slow_conv_dilated<>" not implemented for 'BFloat16'(example.mp4)

    Hello! I followed the instructions of Quickstart with these settings: python interactive_gui.py --video .\example\example.mp4 As I don't have a GPU, I change the map location to 'CPU'. When I select the "click" radio button and click on the object to create the mask, a runtime error is thrown. image Could you give me some suggestions? Looking forward to your reply.

    opened by xwhkkk 6
  • --images --mem_profile 2 | RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0

    To replicate:

    • create a folder with only one image
    • run: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/
    • select the "click" radio button
    • click on the image to create a mask
    • select the "scribble" radio button
    • "scribble" an area in the picture
    • a runtime error is thrown
    opened by zdhernandez 5
  • Overlay and mask files not equal to the size of the original input image

    @hkchengrex Processing one image larger than 1K resolution in one folder with the command: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/

    • clicking on an object produces the mask
    • clicking "save" saves the overlay and masks
    • both the overlay and mask files are reduced to a fixed resolution of width 480px, height 640px

    Q. Can we keep the size of the output files equal to the size of the original input image? Q. Can we add a flag to choose between the current behavior and preserving the resolution of the input image?

    opened by zdhernandez 4
  • Getting "ValueError: Davis root folder must be named "DAVIS"" when I try to run eval_interactive_davis.py

    Traceback (most recent call last):
      File "/home/bereket/Desktop/IRCAD-Data/MiVOS/MiVOS-MiVOS-STCN/eval_interactive_davis.py", line 76, in <module>
        with DavisInteractiveSession(davis_root=davis_path+'/trainval', report_save_dir='../output', max_nb_interactions=8, max_time=8*30) as sess:
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/session/session.py", line 89, in __enter__
        samples, max_t, max_i = self.connector.start_session(
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/connector/local.py", line 29, in start_session
        self.service = EvaluationService(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/evaluation/service.py", line 27, in __init__
        self.davis = Davis(davis_root=davis_root)
      File "/home/bereket/anaconda3/envs/ivos/lib/python3.9/site-packages/davisinteractive/dataset/davis.py", line 93, in __init__
        raise ValueError('Davis root folder must be named "DAVIS"')
    ValueError: Davis root folder must be named "DAVIS"

    opened by be-redAsmara 4
  • Processing on long video with high resolution

    Processing on long video with high resolution

    Hello! Thank you for the amazing framework!

    I have an issue while processing on long video with high resolution. I ran out of GPU memory. As I understand, mivos tries to upload all images directly to GPU and if the video is too long or in high-resolution mivos can't handle such cases. Is there is a way to fix this issue? Maybe modify code to work with data chunks?

    Thank you in advance!

    opened by devidlatkin 4
  • Has anyone met the following problem when running "interactive_gui.py"?

    Traceback (most recent call last):
      File "interactive_gui.py", line 23, in <module>
        from PyQt5.QtWidgets import (QWidget, QApplication, QMainWindow, QComboBox, QGridLayout,
    ImportError: /usr/lib/x86_64-linux-gnu/libQt5Core.so.5: version `Qt_5.15' not found (required by /home/fg/anaconda3/envs/MiVOS/lib/python3.7/site-packages/PyQt5/QtWidgets.abi3.so)

    opened by Starboy-at-earth 4
  • Static dataset in download_datasets.py

    I noticed that there is a static dataset in download_datasets.py -- where is this static dataset used?

    Also, in the readme you say you use BL30K to train the fusion model, and BL30K is very large (600G). So you used a 600G dataset to pretrain the fusion model?

    opened by nazimii 3
  • Temporal information

    Hi, I am interested in your project and would like to go into detail on an aspect related to temporal information. Are you training your model on video datasets? Are you getting temporal information from the dataset, or has your model been trained on single images considering only spatial information?

    Thank you so much. Best, Francesca

    opened by FrancescaCi 3
  • CPU profile 2 throws CUDA out of memory for one image with multiple objects when the propagate button is clicked

    @hkchengrex To replicate:

    • load only one image of size 3024 by 4032 in the folder ./example/test_folder/
    • run: python interactive_gui.py --mem_profile 2 --images ./example/test_folder/ --resolution -1 --num_objects 4
    • click on one object to create the overlay of the first object (red)
    • select num keypad 2 and click a different object (to produce an overlay of a different color)
    • select num keypad 3 and click a different object (to produce an overlay of a different color)
    • select num keypad 3 and click a different object (to produce an overlay of a different color)
    • click "propagate" -- an error is thrown

    Even though I am using one image, clicking "Save" does what it is supposed to do (saves the overlay and masks). But clicking "Propagate" should not throw a CUDA error when --mem_profile is set to 2, right? It should not have used the GPU.

    opened by zdhernandez 7