
AP-10K: A Benchmark for Animal Pose Estimation in the Wild

Introduction | Updates | Overview | Download | Training Code | Key Questions | License

Introduction

This is the official repository of AP-10K: A Benchmark for Animal Pose Estimation in the Wild (NeurIPS 2021 Datasets and Benchmarks Track). It contains the introduction, annotation files, and code for AP-10K, the first large-scale benchmark for general animal pose estimation. AP-10K consists of 10,015 images collected and filtered from 23 animal families and 54 species, with high-quality keypoint annotations. The dataset also includes about 50k additional images with family and species labels. AP-10K can be used for supervised learning, cross-domain transfer learning, and the study of intra- and inter-family domain generalization, and it can also support self-supervised learning, semi-supervised learning, etc. The annotation files follow the COCO style.

Updates

01/11/2021 We have uploaded the corresponding code and pretrained models for use with the AP-10K dataset!

01/11/2021 We have updated the dataset! It now has 54 species for training!

01/11/2021 The AP-10K dataset is integrated into mmpose! Please enjoy it!

11/10/2021 The paper is accepted to the NeurIPS 2021 Datasets and Benchmarks Track!

31/08/2021 The paper is posted on arXiv! We have uploaded the annotation file!

Overview

Keypoint Definition

| Keypoint | Description | Keypoint | Description |
| --- | --- | --- | --- |
| 1 | Left Eye | 2 | Right Eye |
| 3 | Nose | 4 | Neck |
| 5 | Root of Tail | 6 | Left Shoulder |
| 7 | Left Elbow | 8 | Left Front Paw |
| 9 | Right Shoulder | 10 | Right Elbow |
| 11 | Right Front Paw | 12 | Left Hip |
| 13 | Left Knee | 14 | Left Back Paw |
| 15 | Right Hip | 16 | Right Knee |
| 17 | Right Back Paw | | |
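
For convenience, the keypoint order above can also be written down in code. This is a minimal sketch based solely on the table; the identifier names are ours, not part of the released annotation files:

```python
# AP-10K keypoint names in annotation order (indices 1-17 in the table above).
AP10K_KEYPOINTS = [
    "left_eye", "right_eye", "nose", "neck", "root_of_tail",
    "left_shoulder", "left_elbow", "left_front_paw",
    "right_shoulder", "right_elbow", "right_front_paw",
    "left_hip", "left_knee", "left_back_paw",
    "right_hip", "right_knee", "right_back_paw",
]
```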

Annotations Overview

Image Background

| Id | Background type | Id | Background type |
| --- | --- | --- | --- |
| 1 | grass or savanna | 2 | forest or shrub |
| 3 | mud or rock | 4 | snowfield |
| 5 | zoo or human habitation | 6 | swamp or riverside |
| 7 | desert or gobi | 8 | mugshot |

Download

The dataset and corresponding files can be downloaded from

[Google Drive] [Baidu Pan] (code: 6uz6)

(Optional) The full version with both labeled and unlabeled images can be downloaded with the script provided here

[Google Drive] [Baidu Pan] (code: 5lxi)

Training Code

Here we provide an example of training models on the AP-10K dataset. The code is based on the mmpose project.

Installation

Please refer to install.md for installation instructions.

Dataset Preparation

Please download the dataset from the Download section and extract it under the data folder, e.g.,

mkdir data
unzip ap-10k.zip -d data/
mv data/ap-10k data/ap10k

The extracted dataset should look like this:

AP-10K
├── mmpose
├── docs
├── tests
├── tools
├── configs
└── data
    └── ap10k
        ├── annotations
        │   ├── ap10k-train-split1.json
        │   ├── ap10k-train-split2.json
        │   ├── ap10k-train-split3.json
        │   ├── ap10k-val-split1.json
        │   ├── ap10k-val-split2.json
        │   ├── ap10k-val-split3.json
        │   ├── ap10k-test-split1.json
        │   ├── ap10k-test-split2.json
        │   └── ap10k-test-split3.json
        └── data
            ├── 000000000001.jpg
            ├── 000000000002.jpg
            └── ...
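
Since the annotation files follow the COCO style, they can be read with standard COCO tooling. A minimal sketch using pycocotools (pip install pycocotools), assuming the directory layout above:

```python
# Load one AP-10K split with the standard COCO API.
from pycocotools.coco import COCO

coco = COCO("data/ap10k/annotations/ap10k-train-split1.json")
img_ids = coco.getImgIds()
print(f"{len(img_ids)} images in this split")

# Keypoints use the COCO format [x1, y1, v1, x2, y2, v2, ...],
# where v is the visibility flag.
ann_ids = coco.getAnnIds(imgIds=img_ids[:1])
ann = coco.loadAnns(ann_ids)[0]
print(ann["keypoints"][:6])
```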

Inference

The checkpoints can be downloaded from HRNet-w32, HRNet-w48, ResNet-50, ResNet-101.

python tools/test.py <CONFIG_FILE> <DET_CHECKPOINT_FILE>
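
For example, to evaluate the HRNet-w32 model with the config used below in Training (the checkpoint filename here is illustrative; use the name of the file you downloaded):

python tools/test.py configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ap10k/hrnet_w32_ap10k_256x256.py hrnet_w32_ap10k.pth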

Training

bash tools/dist_train.sh <CONFIG_FILE> <GPU_NUM>

For example, to train the HRNet-w32 model with 1 GPU, please run:

bash tools/dist_train.sh configs/animal/2d_kpt_sview_rgb_img/topdown_heatmap/ap10k/hrnet_w32_ap10k_256x256.py 1

Key Questions

1. For what purpose was the dataset created?

AP-10K is created to facilitate research in the area of animal pose estimation. It enables the study of several challenging questions that arise when more training data from diverse species are available, such as:

  1. How do representative human pose estimation models perform on the animal pose estimation task?
  2. Does the representation ability of a deep model benefit from training on a large-scale dataset with diverse species?
  3. What is the impact of pretraining, e.g., on the ImageNet dataset or on human pose estimation datasets, when a large-scale dataset with diverse species is available?
  4. How well does a model trained on data from specific species or families generalize within and across families?

However, previous datasets for animal pose estimation contain a limited number of animal species. It is therefore impossible to study these questions with existing datasets, which contain at most 5 species, far from enough to draw sound conclusions. By contrast, AP-10K covers 23 families and 54 species and can thus help researchers study these questions.

2. Was any cleaning of the data done?

We removed duplicated images by using the aHash algorithm to detect similar images, followed by manual checking. Images with heavy occlusion or logos were removed manually. The cleaned images were then categorized by species and family.
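
To illustrate the idea, here is a minimal average-hash (aHash) sketch in the spirit of this cleaning step; it is not the authors' exact pipeline, and the 5-bit threshold is our assumption. It requires Pillow:

```python
from PIL import Image

def ahash(path, size=8):
    # Downscale to a size x size grayscale thumbnail.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: is it brighter than the mean?
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    # Number of differing bits between two hashes.
    return bin(h1 ^ h2).count("1")

# Image pairs whose hashes differ in only a few bits are likely
# near-duplicates and would be flagged for manual inspection.
if hamming(ahash("a.jpg"), ahash("b.jpg")) <= 5:
    print("possible duplicate")
```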

3. How were the keypoints instructed to be labeled?

Annotators first learned about the physiognomy, body structure, and keypoint distribution of the animals. Then, five images of each species were presented to each annotator for keypoint annotation, and these were used to assess annotation quality. Annotators who demonstrated good annotation quality were further trained on how to handle partial absence of the body due to occlusion and were involved in the subsequent annotation process. Annotators were asked to annotate all visible keypoints. For occluded keypoints, they were asked to annotate those whose location they could estimate from the body plan, pose, and symmetry of the body, where the length of occluded limbs or the location of occluded keypoints could be inferred from the visible ones. Other keypoints were left unlabeled.

To guarantee annotation quality, we adopted a sequential labeling strategy. Three rounds of cross-checking and correction were conducted, combining manual checks with automatic ones (according to specific rules, e.g., keypoints belonging to an instance must lie within the same bounding box) to reduce the possibility of mislabeling. First, annotators labeled the keypoints of each instance and submitted version-1 labels to senior, well-trained annotators. The senior annotators checked the quality of the version-1 labels and returned an error list, according to which the annotators fixed the errors. Finally, the annotators submitted the fixed version-2 labels to the senior annotators, who performed a last round of correction to catch any remaining mislabeled keypoints. After all three rounds, a release version of the dataset with high-quality labels was obtained. A sketch of the bounding-box rule is shown below.
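
As a hedged sketch of that automatic rule (not the authors' exact checker), the COCO-style fields "bbox" = [x, y, w, h] and "keypoints" = [x1, y1, v1, ...] can be checked directly; the slack margin is our choice:

```python
import json

def keypoints_in_bbox(ann, slack=5):
    # All labeled keypoints (visibility v > 0) should lie inside
    # the instance's bounding box, up to a small slack margin.
    x, y, w, h = ann["bbox"]
    kps = ann["keypoints"]
    for i in range(0, len(kps), 3):
        kx, ky, v = kps[i:i + 3]
        if v > 0 and not (x - slack <= kx <= x + w + slack
                          and y - slack <= ky <= y + h + slack):
            return False
    return True

with open("data/ap10k/annotations/ap10k-train-split1.json") as f:
    coco = json.load(f)

suspect = [a["id"] for a in coco["annotations"] if not keypoints_in_bbox(a)]
print(f"{len(suspect)} annotations flagged for manual review")
```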

4. Unity of keypoint and difference of walk type

If we strictly followed biology and defined the keypoints by the positions of the bones, some of the labeled keypoints would be hard or even impossible to annotate and would appear inconsistent with the animal's movement. Ungulates (and other unguligrade animals) mainly rely on their toes in movement, with their paws, ankles, and knees observable. Compared with these keypoints, the actual hips are less distinctive and difficult to annotate since they are hidden in the body. A similar phenomenon can be observed in digitigrade animals. On the other hand, plantigrade animals walk with their metatarsals (paws) flat on the ground, so their paws, knees, and hips are more distinguishable in movement. Thus, we annotate the paws, ankles, and knees for unguligrade and digitigrade animals, and the paws, knees, and hips for plantigrade animals. For simplicity, we use 'hip' to denote the knees of unguligrade and digitigrade animals and 'knee' to denote their ankles. For plantigrade animals, the annotation follows the biological definition. As a result, the visual distribution of keypoints is similar across the dataset, with the 'knee' located around the middle of the limbs for all animals.

5. What tasks could the dataset be used for?

AP-10K can be used for research on animal pose estimation. It can also support specific machine learning topics such as few-shot learning, domain generalization, and self-supervised learning. Please see the Discussion section in the paper.

License

The dataset is released under the CC-BY-4.0 license.
