Light-Head R-CNN

Introduction

We release code for Light-Head R-CNN.

This repo reflects my best practice for research.

This repo is organized as follows:

light_head_rcnn/
    |->experiments
    |    |->user
    |    |    |->your_models
    |->lib       
    |->tools
    |->output

Main Results

  1. We train on COCO trainval, which includes 80k training and 35k validation images, and test on minival, a 5k subset of the validation set. Note that results on test-dev should be slightly higher than on minival.
  2. We provide details for several crucial ablation experiments, so it is easy to diff the configurations and see what changed.
  3. We share our training logs in the GoogleDrive output folder, which contains dumped models, training loss, and the speed of each step. (Experiments were run on 8 Titan Xp GPUs with 2 images per GPU; training should finish within one day.)
  4. Due to time limitations, extra experiments are coming soon.
Model Name                                         | mAP@[0.5:0.95] | mAP@0.5 | mAP@0.75 | mAP@S | mAP@M | mAP@L
---------------------------------------------------|----------------|---------|----------|-------|-------|------
R-FCN, ResNet-v1-101 (our reproduced baseline)     | 35.5           | 54.3    | 33.8     | 12.8  | 34.9  | 46.1
Light-Head R-CNN, ResNet-v1-101                    | 38.2           | 60.9    | 41.0     | 20.9  | 42.2  | 52.8
Light-Head, ResNet-v1-101 + align pooling          | 39.3           | 61.0    | 42.4     | 22.2  | 43.8  | 53.2
Light-Head, ResNet-v1-101 + align pooling + nms0.5 | 40.0           | 62.1    | 42.9     | 22.5  | 44.6  | 54.0

Experiment paths corresponding to the models above:

experiments/lizeming/rfcn_reproduce.ori_res101.coco.baseline
experiments/lizeming/light_head_rcnn.ori_res101.coco 
experiments/lizeming/light_head_rcnn.ori_res101.coco.ps_roialign
experiments/lizeming/light_head_rcnn.ori_res101.coco.ps_roialign

Requirements

  1. tensorflow-gpu==1.5.0 (we have only tested on TensorFlow 1.5.0; earlier versions are not supported because of our GPU NMS implementation)
  2. Python 3. We recommend using Anaconda, as it already includes many common packages. (Python 2 is not tested.)
  3. Some Python packages may be missing; please install them according to the error messages. A quick environment check is sketched below.
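
As a quick sanity check (a minimal sketch, not part of the repo), you can verify the TensorFlow version and GPU visibility before compiling:

# Minimal environment check (not part of the repo): verifies the TensorFlow
# version this README assumes and lists the GPUs TensorFlow can see.
import tensorflow as tf
from tensorflow.python.client import device_lib

assert tf.__version__.startswith("1.5"), \
    "only tested with tensorflow-gpu==1.5.0, found %s" % tf.__version__

gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print("Visible GPUs:", gpus)  # empty list -> GPU build or driver problem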

Installation, Prepare data, Testing, Training

Installation

  1. Clone the Light-Head R-CNN repository; we will refer to the directory you cloned it into as ${lighthead_ROOT}.
git clone https://github.com/zengarden/light_head_rcnn
  2. Compile:
cd ${lighthead_ROOT}/lib;
bash make.sh

Make sure all compilation succeeds. If errors arise, check the FAQ below for common compilation problems.

  3. Create the log dump (output) and data directories:
cd ${lighthead_ROOT};
mkdir output
mkdir data

Prepare data

data should be organized as follows:

data/
    |->imagenet_weights/res101.ckpt
    |->MSCOCO
    |    |->odformat
    |    |->instances_xxx.json
    |    |->train2014
    |    |->val2014
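
Once the basemodel and odformat annotations described below are in place, a quick check like the following (a hypothetical helper, not shipped with the repo) confirms the layout:

# Hypothetical layout check (not part of the repo): verifies that data/
# matches the directory structure described above.
import os

expected = [
    "data/imagenet_weights/res101.ckpt",
    "data/MSCOCO/odformat",
    "data/MSCOCO/train2014",
    "data/MSCOCO/val2014",
]
for path in expected:
    print(path, "OK" if os.path.exists(path) else "MISSING")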

Download res101 basemodel:

wget -v http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz
tar -xzvf resnet_v1_101_2016_08_28.tar.gz
mv resnet_v1_101.ckpt res101.ckpt
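
To confirm the checkpoint was extracted and renamed correctly (an optional sketch, assuming it has been placed under data/imagenet_weights/ as shown above), you can list a few of its variables:

# Optional sanity check (not part of the repo): inspect the ResNet-101
# checkpoint and print a few of the variables it contains.
import tensorflow as tf

reader = tf.train.NewCheckpointReader("data/imagenet_weights/res101.ckpt")
shapes = reader.get_variable_to_shape_map()
print(len(shapes), "variables found, for example:")
for name in sorted(shapes)[:5]:
    print(" ", name, shapes[name])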

We convert instances_xxx.json to odformat (object detection format), in which each line is a JSON annotation for one image. Our converted odformat files are shared in GoogleDrive as odformat.zip.
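
Each odformat file can be read line by line as standalone JSON; a minimal sketch is below (the file name and field names such as fpath and gtboxes are assumptions, so check the actual files in odformat.zip):

# Minimal sketch for reading an odformat annotation file: one JSON object
# per line, one line per image. File/field names here are assumptions.
import json

with open("data/MSCOCO/odformat/coco_train2014.odgt") as f:
    for line in f:
        record = json.loads(line)   # annotation for a single image
        print(record.get("fpath"), len(record.get("gtboxes", [])))
        break                       # only inspect the first image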

Testing

  1. Use -d to assign GPU ids for testing (e.g. -d 0,1,2,3 or -d 0-3; a short sketch after the example below shows how such ranges expand).
  2. Use -s to visualize the results.
  3. Use -se to specify the start epoch for testing.

We share our experiments' output (logs) folder in GoogleDrive. Download it, place it under ${lighthead_ROOT}, and then test our released models.

e.g.

cd experiments/lizeming/light_head_rcnn.ori_res101.coco.ps_roialign
python3 test.py -d 0-7 -se 26
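
For reference, a -d range such as 0-7 simply expands to the listed GPU ids; the following illustration (a hypothetical helper, not the repo's actual parser) shows the expansion:

# Hypothetical illustration of how a -d GPU spec such as "0-3" or "0,1,2,3"
# expands into a list of device ids (the repo's real parser may differ).
def expand_gpu_spec(spec):
    ids = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            ids.extend(range(int(start), int(end) + 1))
        else:
            ids.append(int(part))
    return ids

print(expand_gpu_spec("0-7"))      # [0, 1, 2, 3, 4, 5, 6, 7]
print(expand_gpu_spec("0,1,2,3"))  # [0, 1, 2, 3]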

Training

We provide a commonly used train.py in tools, which can be copied or linked into an experiment folder.

e.g.

cd experiments/lizeming/light_head_rcnn.ori_res101.coco.ps_roialign
python3 config.py -tool
cp tools/train.py .
python3 train.py -d 0-7

Features

This repo is designed to be fast and simple for research. There is still room for improvement: the anchor_target and proposal_target layers are implemented with tf.py_func, which means they run on the CPU.
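
For context, tf.py_func wraps an ordinary Python/NumPy function as a graph op, so the wrapped code runs in the Python interpreter on the CPU; a minimal, self-contained sketch (unrelated to the repo's actual target layers) is:

# Minimal tf.py_func sketch (TF 1.x): the wrapped NumPy function executes in
# Python on the CPU, which is why the target layers above are not on the GPU.
import numpy as np
import tensorflow as tf

def double_op(x):
    return (x * 2).astype(np.float32)

inp = tf.placeholder(tf.float32, [None])
out = tf.py_func(double_op, [inp], tf.float32)

with tf.Session() as sess:
    print(sess.run(out, feed_dict={inp: np.array([1.0, 2.0, 3.0], np.float32)}))  # [2. 4. 6.]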

Disclaimer

This is an implementation of Light-Head R-CNN; it is worth noting that:

  • The original implementation is based on our internal platform used at Megvii. There are slight differences in final accuracy and running time due to the many details involved in switching platforms.
  • The code is tested on a server with 8 Pascal Titan Xp GPUs, 188 GB of memory, and a 40-core CPU.
  • We wrote a faster NMS on our internal platform, while here we use TensorFlow's built-in NMS instead.
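
For reference, the TensorFlow NMS op looks like the following standalone sketch (not the repo's actual call site):

# Standalone sketch of TensorFlow's built-in NMS (tf.image.non_max_suppression),
# used here in place of the faster internal implementation.
import tensorflow as tf

boxes = tf.constant([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], tf.float32)
scores = tf.constant([0.9, 0.8, 0.7])
keep = tf.image.non_max_suppression(boxes, scores, max_output_size=10, iou_threshold=0.5)

with tf.Session() as sess:
    print(sess.run(keep))  # indices of boxes kept after suppression, e.g. [0 2]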

Citing Light-Head R-CNN

If you find Light-Head R-CNN useful in your research, please consider citing:

@article{li2017light,
  title={Light-Head R-CNN: In Defense of Two-Stage Object Detector},
  author={Li, Zeming and Peng, Chao and Yu, Gang and Zhang, Xiangyu and Deng, Yangdong and Sun, Jian},
  journal={arXiv preprint arXiv:1711.07264},
  year={2017}
}

FAQ

  • fatal error: cuda/cuda_config.h: No such file or directory

First, find where cuda_config.h is located.

e.g.

find /usr/local/lib/ | grep cuda_config.h

Then export your CPATH, for example:

export CPATH=$CPATH:/usr/local/lib/python3.5/dist-packages/external/local_config_cuda/cuda/