Official code of the paper "Expanding Low-Density Latent Regions for Open-Set Object Detection" (CVPR 2022)

Overview

OpenDet

Expanding Low-Density Latent Regions for Open-Set Object Detection (CVPR 2022)
Jiaming Han, Yuqiang Ren, Jian Ding, Xingjia Pan, Ke Yan, Gui-Song Xia.
arXiv preprint.

OpenDet2: OpenDet implemented on top of detectron2.

Setup

The code is based on detectron2 v0.5.

  • Installation

Here is a from-scratch setup script.

conda create -n opendet2 python=3.8 -y
conda activate opendet2

conda install pytorch=1.8.1 torchvision cudatoolkit=10.1 -c pytorch -y
pip install detectron2==0.5 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.8/index.html
git clone https://github.com/csuhan/opendet2.git
cd opendet2
pip install -v -e .
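
A quick, optional sanity check that the environment matches what the commands above install (the version numbers below are simply the ones pinned above):

python -c "import torch, detectron2; print(torch.__version__, detectron2.__version__, torch.cuda.is_available())"
# expected to print roughly: 1.8.1  0.5  True  (on a machine with a CUDA 10.1-compatible GPU)
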
  • Prepare datasets

Please follow datasets/README.md for dataset preparation. Then generate the VOC-COCO datasets:

bash datasets/opendet2_utils/prepare_openset_voc_coco.sh
# copy the data splits provided by us
cp -rf datasets/voc_coco_ann datasets/voc_coco
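
To double-check that the splits are in place, a quick listing is enough (the exact file names come from the prepare script and the provided annotations, so we only list what is there):

ls datasets/voc_coco | head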

Model Zoo

We report the results on VOC and VOC-COCO-20, and provide pretrained models. Please refer to the corresponding log file for full results.

  • Faster R-CNN
| Method  | backbone | mAP_K↑ (VOC) | WI↓   | AOSE↓ | mAP_K↑ | AP_U↑ | Download     |
| ------- | -------- | ------------ | ----- | ----- | ------ | ----- | ------------ |
| FR-CNN  | R-50     | 80.06        | 19.50 | 16518 | 58.36  | 0     | config model |
| PROSER  | R-50     | 79.42        | 20.44 | 14266 | 56.72  | 16.99 | config model |
| ORE     | R-50     | 79.80        | 18.18 | 12811 | 58.25  | 2.60  | config model |
| DS      | R-50     | 79.70        | 16.76 | 13062 | 58.46  | 8.75  | config model |
| OpenDet | R-50     | 80.02        | 12.50 | 10758 | 58.64  | 14.38 | config model |
| OpenDet | Swin-T   | 83.29        | 10.76 | 9149  | 63.42  | 16.35 | config model |
  • RetinaNet
| Method         | mAP_K↑ (VOC) | WI↓   | AOSE↓ | mAP_K↑ | AP_U↑ | Download     |
| -------------- | ------------ | ----- | ----- | ------ | ----- | ------------ |
| RetinaNet      | 79.63        | 14.16 | 36531 | 57.32  | 0     | config model |
| Open-RetinaNet | 79.64        | 10.74 | 17208 | 57.32  | 10.55 | config model |
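
For reference, the open-set metrics in the tables follow the common definitions from the open-set detection literature (a sketch of the metrics, not this repo's exact evaluation code): WI (Wilderness Impact) measures how much adding unknown-class images degrades precision on the known classes and is usually reported at a fixed recall level; AOSE (Absolute Open-Set Error) counts unknown objects misclassified as a known class; mAP_K is the mean AP over known classes; AP_U is the AP of the unknown class.

\mathrm{WI} = \frac{P_{K}}{P_{K \cup U}} - 1

where P_K is the precision measured on known classes only and P_{K ∪ U} is the precision when unknown-class images are added to the test set.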

Note:

  • You can also download the pretrained models from the GitHub release page or from BaiduYun (extraction code: ABCD).
  • The above results come from our reimplementation, so they differ slightly from those reported in the paper.
  • The official code of ORE is available at OWOD, so we do not plan to include ORE in this repository.

Online Demo

Try our online demo on Hugging Face Spaces.

Train and Test

  • Testing

First, download the pretrained weights from the model zoo above, e.g., OpenDet.

Then, run the following command:

python tools/train_net.py --num-gpus 8 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
        --eval-only MODEL.WEIGHTS output/faster_rcnn_R_50_FPN_3x_opendet/model_final.pth
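
Evaluation does not require 8 GPUs; --num-gpus 1 gives the same metrics, just more slowly. If you evaluate a checkpoint downloaded from the model zoo instead of one you trained yourself, simply point MODEL.WEIGHTS at wherever you saved it (the path below is a placeholder):

python tools/train_net.py --num-gpus 1 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
        --eval-only MODEL.WEIGHTS /path/to/downloaded/model_final.pth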
  • Training

The training process is the same as in detectron2:

python tools/train_net.py --num-gpus 8 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml
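
If you train with fewer GPUs, you can override the batch size and learning rate on the command line in the usual detectron2 way; the values below only illustrate the linear-scaling rule and are not tuned by us:

python tools/train_net.py --num-gpus 2 --config-file configs/faster_rcnn_R_50_FPN_3x_opendet.yaml \
        SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.005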

To train with the Swin-T backbone, please download swin_tiny_patch4_window7_224.pth and convert it to detectron2's format using tools/convert_swin_to_d2.py.

wget https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth
python tools/convert_swin_to_d2.py swin_tiny_patch4_window7_224.pth swin_tiny_patch4_window7_224_d2.pth
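
Then launch training with the Swin-T config, pointing MODEL.WEIGHTS at the converted checkpoint. The config name below is a placeholder; use the Swin-T OpenDet config shipped under configs/:

python tools/train_net.py --num-gpus 8 --config-file configs/<swin_t_opendet_config>.yaml \
        MODEL.WEIGHTS swin_tiny_patch4_window7_224_d2.pth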

Citation

If you find our work useful for your research, please consider citing:

@InProceedings{han2022opendet,
    title     = {Expanding Low-Density Latent Regions for Open-Set Object Detection},
    author    = {Han, Jiaming and Ren, Yuqiang and Ding, Jian and Pan, Xingjia and Yan, Ke and Xia, Gui-Song},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2022}
}
Comments
  • Question about the t-SNE visualization in Figure 2 of the paper

    @csuhan Hello, I would like to ask about one point in the paper. Figure 2 shows a t-SNE visualization of latent features, where the colored points are VOC classes (known) and the black triangles are non-VOC classes (unknown, taken from COCO).

    In your work the unknowns are modeled as a single class (rather than several), so it is natural to expect that a model trained this way clusters the unknowns into one cluster, as in Figure 2(b), with some scattered points spread among the known-class clusters.

    In reality, however, that single unknown class covers many latent categories, e.g. COCO classes minus VOC classes = 80 - 20 = 60, so one unknown class may contain 60 potential categories. Isn't it a bit strange that the features of 60 categories are pulled together into one cluster, as in Figure 2(b)? In other words, is a single class center for the unknowns really reasonable?

    I would like to hear your thoughts, thank you!

    opened by ChibisukeDragon 4
  • Question about loss_cls_ic

    Nice job! I am trying to reproduce your work, but I find that loss_cls_ic is 0 most of the time after training starts. Is this normal? (I set batch_size=4 because of limited computational resources.) Thanks.

    opened by Yifei-Y 3
  • error: Multiple top-level packages discovered in a flat-layout: ['demo', 'configs', 'opendet2', 'datasets', 'detectron2'].

    When I followed the README to install opendet2, I ran into trouble. Here is my command. I have several RTX 3090 GPUs.

    # CUDA V11.1
    # torch 1.9.0
    # python 3.8
    conda create -n opendet2 python=3.8 -y
    conda activate opendet2
    # get pytorch 1.9.0. I got RuntimeError: CUDA error: device-side assert triggered when using torch 1.8.1
    pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
    # build opencv
    pip install opencv-python
    pip install opencv-contrib-python
    # build detectron2. DO NOT build detectron2 from latest SOURCE. In the latest version, some named methods have been removed.
    pip install detectron2==0.5 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.9/index.html
    # build opendet2
    cd opendet2
    pip install -v -e .
    

    When I ran the last command, pip install -v -e ., I got this error message:

    (opendet2) [email protected]:~/opendet/opendet2$ pip install -v -e .
    Using pip 21.2.4 from /home/yupeng/anaconda3/envs/opendet2/lib/python3.8/site-packages/pip (python 3.8)
    Looking in indexes: https://mirrors.bfsu.edu.cn/pypi/web/simple/
    Obtaining file:///home/yupeng/opendet/opendet2
        Running command python setup.py egg_info
        error: Multiple top-level packages discovered in a flat-layout: ['demo', 'configs', 'opendet2', 'datasets', 'detectron2'].
    
        To avoid accidental inclusion of unwanted files or directories,
        setuptools will not proceed with this build.
    
        If you are trying to create a single distribution with multiple packages
        on purpose, you should not rely on automatic discovery.
        Instead, consider the following options:
    
        1. set up custom discovery (`find` directive with `include` or `exclude`)
        2. use a `src-layout`
        3. explicitly set `py_modules` or `packages` with a list of names
    
        To find more information, look for "package discovery" on setuptools docs.
    WARNING: Discarding file:///home/yupeng/opendet/opendet2. Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
    (opendet2) [email protected]:~/opendet/opendet2$
    

    Running pip install setuptools==58.2.0 and retrying pip install -v -e . fixed it for me, so the problem seems to come from the latest setuptools>=61.0. You can find more information in the links below: https://github.com/pypa/setuptools/issues/3197 https://github.com/pypa/setuptools/issues/3227 https://github.com/facebookresearch/detectron2/issues/3943 https://github.com/facebookresearch/detectron2/issues/3811 Good luck.

    opened by ChibisukeDragon 2
  • Deadlock during 8-GPU training

    Faster R-CNN with a basic ResNet backbone deadlocks: I simply replaced Base_RCNN_FPN.yaml with detectron2's Base_RCNN_C4.yaml. When training with the example command in the README, it hangs on the first batch; GPU utilization is 100%, but only about 2400 MB of GPU memory is used, and after 14 hours overnight it was still stuck at the same place with no output and no error. Switching to single-GPU training works fine. Could you offer some help?

    opened by buaali 0
  • CUDA error: device-side assert triggered

    When I run train_net.py, I get this error after loading R-50.pkl. I would like to know how to solve it. Thanks a lot. My environment: CUDA 11.1, Python 3.7, torch 1.8.1.

    opened by Millielele 0
  • Handling of weight_decay for bias parameters

    opendet2/solver/build.py, line 39: the comment and the code do not match. The comment says the weight_decay of bias parameters keeps the default value, but the code actually sets it to None, which causes the following error: TypeError: add(): argument 'alpha' must be Number, not NoneType

    opened by sunxuhao 2
  • Reproducibility issue

    Hi,

    Amazing work on open-set detection! I trained the model after following the dataset preparation steps you suggest and with the exact same configs. The only difference is that I used 1 GPU instead of 8 GPUs, and these are the results I obtained. Interestingly, WI and AOSE are worse, but AP is better. Do you think this much of a difference is expected just from using fewer GPUs, or is there some other issue I should look for? Thanks in advance.

    VOC-COCO-20

    | Result     | WI↓   | AOSE↓ | AP_U↑ |
    | ---------- | ----- | ----- | ----- |
    | Paper      | 14.95 | 11286 | 14.93 |
    | Reproduced | 20.68 | 13370 | 21.36 |

    VOC-COCO-0.5n

    | Result     | WI↓  | AOSE↓ | AP_U↑ |
    | ---------- | ---- | ----- | ----- |
    | Paper      | 6.44 | 3944  | 9.05  |
    | Reproduced | 55   | 5369  | 18.09 |

    opened by misraya 3
  • Running command python setup.py egg_info error: Multiple top-level packages discovered in a flat-layout: ['data', 'engine', 'solver', 'config', 'modeling', 'evaluation'].

    opened by roywang021 1