A trusty face recognition research platform developed by Tencent Youtu Lab


Introduction

TFace: A trusty face recognition research platform developed by Tencent Youtu Lab. It provides a high-performance distributed training framework and releases implementations of our efficient methods.

This framework consists of several modules: 1. various data augmentation methods, 2. a backbone model zoo, 3. our proposed methods for face recognition and face quality assessment, 4. test protocols for evaluation results and model latency.

Recent News

2021.3: SDD-FIQA: Unsupervised Face Image Quality Assessment with Similarity Distribution Distance accepted by CVPR2021. [paper] [code]

2021.3: Consistent Instance False Positive Improves Fairness in Face Recognition accepted by CVPR2021. [paper] [code]

2021.3: Spherical Confidence Learning for Face Recognition accepted by CVPR2021. [paper] [code]

2020.8: Improving Face Recognition from Hard Samples via Distribution Distillation Loss accepted by ECCV2020. [paper] [code]

2020.3: CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition accepted by CVPR2020. [paper] [code]

Requirements

  • python==3.6.0
  • torch==1.6.0
  • torchvision==0.7.0
  • tensorboard==2.4.0
  • Pillow==5.0.0
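
Assuming a pip-based environment (Python 3.6 itself must be provided separately, e.g. via conda), the pinned packages above can be installed with:

pip install torch==1.6.0 torchvision==0.7.0 tensorboard==2.4.0 Pillow==5.0.0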

Getting Started

Train Data

The training dataset is organized in tfrecord format for efficiency. The raw data of all face images are saved in tfrecord files, and each dataset has a corresponding index file (each line includes tfrecord_name, tfrecord index offset, label).

The IndexTFRDataset class parses the index file to gather image data and labels for training. This form of dataset is convenient for reorganization during data cleaning (there is no need to regenerate the tfrecords; only the index file is rebuilt).
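
The repo's IndexTFRDataset handles this; purely as a minimal sketch of the idea, assuming the standard TFRecord on-disk framing (uint64 length, length CRC, payload, payload CRC) and assuming each record's payload is the encoded image bytes (the real serialization may differ), random access via the index could look like:

```python
# Minimal sketch (NOT the repo's IndexTFRDataset) of how a (tfrecord_name, offset, label)
# index enables random access into tfrecord shards without re-writing them.
import io
import os
import struct

from PIL import Image
from torch.utils.data import Dataset


class IndexedTFRecordDataset(Dataset):
    """Random-access reads into tfrecord shards via a (name, offset, label) index."""

    def __init__(self, tfrecord_dir, index_file, transform=None):
        self.tfrecord_dir = tfrecord_dir
        self.transform = transform
        self.samples = []
        with open(index_file) as f:
            for line in f:
                name, offset, label = line.split()
                self.samples.append((name, int(offset), int(label)))

    def __len__(self):
        return len(self.samples)

    def _read_record(self, path, offset):
        with open(path, "rb") as f:
            f.seek(offset)
            length, = struct.unpack("<Q", f.read(8))  # record length (little-endian uint64)
            f.read(4)                                  # skip masked CRC of the length
            return f.read(length)                      # serialized record payload

    def __getitem__(self, idx):
        name, offset, label = self.samples[idx]
        payload = self._read_record(os.path.join(self.tfrecord_dir, name), offset)
        # Assumption: the payload is the encoded image itself; if it is wrapped in a
        # protobuf Example, an extra parsing step is needed here.
        img = Image.open(io.BytesIO(payload)).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, label
```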

  1. Convert raw images to tfrecords, generating a new data directory that contains several tfrecord files and an index_map file:
python3 tools/img2tfrecord.py --img_list=${img_list} --pts_list=${pts_list} --tfrecords_name=${tfr_data_name}
  2. Convert the old index file (each line includes image_path, label) to the new index file:
python3 tools/convert_new_index.py --old=${old_index} --tfr_index=${tfr_index} --new=${new_index}
  3. Decode the tfrecords back to raw images:
python3 tools/decode.py --tfrecords_dir=${tfr_dir} --output_dir=${output_dir}

Augmentation

The Data Augmentation module implements some 2D-based methods to generate hard samples, e.g., mask, glasses, and headscarf occlusions. For details, see Augmentation.
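
As a rough, hypothetical illustration of such a 2D occlusion (not the module's actual implementation), a PIL-based mask-like augmentation might look like:

```python
# Toy occlusion augmentation: with some probability, paste a flat dark band over the
# lower part of an aligned face crop to roughly simulate a mask-style hard sample.
import random

from PIL import Image, ImageDraw


def random_lower_face_occlusion(img: Image.Image, p: float = 0.5) -> Image.Image:
    """With probability p, cover a random band over the lower part of the face."""
    if random.random() > p:
        return img
    img = img.copy()
    w, h = img.size
    top = random.randint(int(0.55 * h), int(0.70 * h))  # start somewhere below the nose
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, top, w, h], fill=(64, 64, 64))    # flat "mask" block
    return img
```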

Train

Modify DATA_ROOT and INDEX_ROOT in ./tasks/distfc/train_config.yaml: DATA_ROOT is the parent directory of the tfrecord directory, and INDEX_ROOT is the parent directory of the index file.
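
The full schema of train_config.yaml is not reproduced here; as a minimal, hypothetical sketch of just the two entries being edited (example paths):

DATA_ROOT: /data/tface/tfrecords      # parent dir that contains the tfrecord dir (example path)
INDEX_ROOT: /data/tface/index_files   # parent dir that contains the index file (example path)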

bash local_train.sh

Test

For detailed code and steps, see Test.

Benchmark

Evaluation Results

| Backbone | Head | Data | LFW | CFP-FP | CPLFW | AGEDB | CALFW | IJBB (TPR@FAR=1e-4) | IJBC (TPR@FAR=1e-4) |
|---|---|---|---|---|---|---|---|---|---|
| IR_101 | ArcFace | MS1Mv2 | 99.77 | 98.27 | 92.08 | 98.15 | 95.45 | 94.2 | 95.6 |
| IR_101 | CurricularFace | MS1Mv2 | 99.80 | 98.36 | 93.13 | 98.37 | 96.05 | 94.86 | 96.15 |
| IR_18 | ArcFace | MS1Mv2 | 99.65 | 94.89 | 89.80 | 97.23 | 95.60 | 90.06 | 92.39 |
| IR_34 | ArcFace | MS1Mv2 | 99.80 | 97.27 | 91.75 | 98.07 | 95.97 | 92.88 | 94.65 |
| IR_50 | ArcFace | MS1Mv2 | 99.80 | 97.63 | 92.50 | 97.92 | 96.05 | 93.45 | 95.16 |
| MobileFaceNet | ArcFace | MS1Mv2 | 99.52 | 91.66 | 87.93 | 95.82 | 95.12 | 87.07 | 89.13 |
| GhostNet_x1.3 | ArcFace | MS1Mv2 | 99.65 | 94.20 | 89.87 | 96.95 | 95.58 | 89.61 | 91.96 |
| EfficientNetB0 | ArcFace | MS1Mv2 | 99.60 | 95.90 | 91.07 | 97.58 | 95.82 | 91.79 | 93.67 |
| EfficientNetB1 | ArcFace | MS1Mv2 | 99.60 | 96.39 | 91.75 | 97.65 | 95.73 | 92.43 | 94.43 |
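
The IJBB/IJBC columns report TPR at FAR=1e-4. As a generic sketch of how that operating point is typically read off an ROC curve (using scikit-learn for brevity; this is not the repo's test protocol, and the scores below are synthetic placeholders):

```python
# Generic sketch of the TPR@FAR=1e-4 metric reported above, computed from similarity
# scores of genuine/impostor pairs. Not the repo's evaluation code.
import numpy as np
from sklearn.metrics import roc_curve


def tpr_at_far(labels, scores, far_target=1e-4):
    """labels: 1 for genuine pairs, 0 for impostor pairs; scores: cosine similarities."""
    far, tpr, _ = roc_curve(labels, scores)        # roc_curve returns (fpr, tpr, thresholds)
    return float(np.interp(far_target, far, tpr))  # interpolate TPR at the target FAR


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.6, 0.1, 10_000)    # toy genuine-pair similarities
    impostor = rng.normal(0.1, 0.1, 100_000)  # toy impostor-pair similarities
    labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
    scores = np.concatenate([genuine, impostor])
    print(f"TPR@FAR=1e-4: {tpr_at_far(labels, scores):.4f}")
```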

Backbone model size & latency

The device and inference framework information is listed below:

| Device | Inference Framework |
|---|---|
| x86 CPU (Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz) | OpenVINO |
| ARM (Kirin 980) | TNN |

Test results for different backbones and different devices:

| Backbone | Model Size (fp32) | X86 CPU | ARM |
|---|---|---|---|
| EfficientNetB0 | 16MB | 26.29ms | 32.09ms |
| EfficientNetB1 | 26MB | 35.73ms | 46.5ms |
| MobileFaceNet | 4.7MB | 7.63ms | 15.61ms |
| GhostNet_x1.3 | 16MB | 25.70ms | 27.58ms |
| IR_18 | 92MB | 57.34ms | 94.58ms |
| IR_34 | 131MB | 105.58ms | NA |
| IR_50 | 167MB | 165.95ms | NA |
| IR_101 | 249MB | 215.47ms | NA |
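
The table above was measured with OpenVINO and TNN exports. As a rough, non-comparable sanity check of backbone latency in plain PyTorch (the backbone below is a torchvision placeholder, not one from the model zoo), a minimal timing loop could look like:

```python
# Rough CPU latency check in plain PyTorch (not OpenVINO/TNN, so numbers are not
# comparable to the table above). Swap in any backbone you want to measure.
import time

import torch
import torchvision


@torch.no_grad()
def measure_latency(model, input_size=(1, 3, 112, 112), warmup=10, iters=50):
    model.eval()
    x = torch.randn(*input_size)
    for _ in range(warmup):          # warm-up passes are excluded from timing
        model(x)
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    return (time.perf_counter() - start) / iters * 1000.0  # ms per forward pass


if __name__ == "__main__":
    backbone = torchvision.models.mobilenet_v2()  # placeholder backbone
    print(f"avg latency: {measure_latency(backbone):.2f} ms")
```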

Acknowledgement

This repo is modified and adapted from these great repositories; we thank the authors a lot for their great efforts.
