Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition (AGRA, ACM MM 2020, Oral)

Overview

Cross-Domain Facial Expression Recognition Benchmark

Implementation of the following papers:

  1. Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning (TPAMI)
  2. Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition (ACM MM 2020, Oral)

Pipeline

Environment

Ubuntu 16.04 LTS, Python 3.5, PyTorch 1.3

Note: We also provide a Docker image for this project (tag: py3-pytorch1.3-agra).
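
A quick way to verify that the environment matches the versions above (a standalone check, not part of this repository):

import torch

# Expect a 1.3.x version string; CUDA availability depends on your local driver/toolkit setup.
print(torch.__version__)
print(torch.cuda.is_available())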

Datasets

To apply for access to the AFE dataset, please complete the AFE Database User Agreement and submit it to [email protected] or [email protected].

Note:

  1. The AFE Database User Agreement must be signed by a faculty member at a university or college and sent by email.
  2. To comply with the relevant regulations, you need to apply for the image data of the following datasets yourself: CK+, JAFFE, SFEW 2.0, FER2013, ExpW, and RAF.

Pre-Trained Models

You can download the pre-trained models from Baidu Drive (password: tzrf) or OneDrive.

Note: To change the backbone used by a method, modify and run getPreTrainedModel_ResNet.py (or getPreTrainedModel_MobileNet.py) in that method's folder.
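
For illustration only (this is not the repository's getPreTrainedModel script, and the checkpoint filename is a placeholder), a downloaded backbone checkpoint can usually be loaded into a torchvision model along these lines:

import torch
import torchvision.models as models

# Placeholder path: use the checkpoint downloaded from Baidu Drive / OneDrive.
checkpoint = torch.load('resnet50_pretrained.pth', map_location='cpu')
# Some checkpoints wrap the weights in a 'state_dict' entry.
state_dict = checkpoint['state_dict'] if 'state_dict' in checkpoint else checkpoint

# Swap in models.resnet18() or models.mobilenet_v2() for the other backbones.
backbone = models.resnet50(pretrained=False)
# strict=False tolerates missing/unexpected keys, e.g. a classifier head trained on other labels.
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print('missing keys:', missing)
print('unexpected keys:', unexpected)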

Usage

Before running these scripts, download the datasets and the pre-trained models, then run getPreTrainedModel_ResNet.py (or getPreTrainedModel_MobileNet.py).

Run ICID

cd ICID
bash Train.sh

Run DFA

cd DFA
bash Train.sh

Run LPL

cd LPL
bash Train.sh

Run DETN

cd DETN
bash TrainOnSourceDomain.sh     # Train Model On Source Domain
bash TransferToTargetDomain.sh  # Then, Transfer Model to Target Domain

Run FTDNN

cd FTDNN
bash Train.sh

Run ECAN

cd ECAN
bash TrainOnSourceDomain.sh     # Train Model On Source Domain
bash TransferToTargetDomain.sh  # Then, Transfer Model to Target Domain

Run CADA

cd CADA
bash TrainOnSourceDomain.sh     # Train Model On Source Domain
bash TransferToTargetDomain.sh  # Then, Transfer Model to Target Domain

Run SAFN

cd SAFN
bash TrainWithSAFN.sh

Run SWD

cd SWD
bash Train.sh

Run AGRA

cd AGRA
bash TrainOnSourceDomain.sh     # Train Model On Source Domain
bash TransferToTargetDomain.sh  # Then, Transfer Model to Target Domain

Results

All numbers are recognition accuracy (%) on the corresponding target dataset.

Source Domain: RAF

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | ResNet-50 | 74.42 | 50.70 | 48.85 | 53.70 | 69.54 | 59.44 |
| DFA | ResNet-50 | 64.26 | 44.44 | 43.07 | 45.79 | 56.86 | 50.88 |
| LPL | ResNet-50 | 74.42 | 53.05 | 48.85 | 55.89 | 66.90 | 59.82 |
| DETN | ResNet-50 | 78.22 | 55.89 | 49.40 | 52.29 | 47.58 | 56.68 |
| FTDNN | ResNet-50 | 79.07 | 52.11 | 47.48 | 55.98 | 67.72 | 60.47 |
| ECAN | ResNet-50 | 79.77 | 57.28 | 52.29 | 56.46 | 47.37 | 58.63 |
| CADA | ResNet-50 | 72.09 | 52.11 | 53.44 | 57.61 | 63.15 | 59.68 |
| SAFN | ResNet-50 | 75.97 | 61.03 | 52.98 | 55.64 | 64.91 | 62.11 |
| SWD | ResNet-50 | 75.19 | 54.93 | 52.06 | 55.84 | 68.35 | 61.27 |
| AGRA (Ours) | ResNet-50 | 85.27 | 61.50 | 56.43 | 58.95 | 68.50 | 66.13 |

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | ResNet-18 | 67.44 | 48.83 | 47.02 | 53.00 | 68.52 | 56.96 |
| DFA | ResNet-18 | 54.26 | 42.25 | 38.30 | 47.88 | 47.42 | 46.02 |
| LPL | ResNet-18 | 72.87 | 53.99 | 49.31 | 53.61 | 68.35 | 59.63 |
| DETN | ResNet-18 | 64.19 | 52.11 | 42.25 | 42.01 | 43.92 | 48.90 |
| FTDNN | ResNet-18 | 76.74 | 50.23 | 49.54 | 53.28 | 68.08 | 59.57 |
| ECAN | ResNet-18 | 66.51 | 52.11 | 48.21 | 50.76 | 48.73 | 53.26 |
| CADA | ResNet-18 | 73.64 | 55.40 | 52.29 | 54.71 | 63.74 | 59.96 |
| SAFN | ResNet-18 | 68.99 | 49.30 | 50.46 | 53.31 | 68.32 | 58.08 |
| SWD | ResNet-18 | 72.09 | 53.52 | 49.31 | 53.70 | 65.85 | 58.89 |
| AGRA (Ours) | ResNet-18 | 77.52 | 61.03 | 52.75 | 54.94 | 69.70 | 63.19 |

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | MobileNet V2 | 57.36 | 37.56 | 38.30 | 44.47 | 60.64 | 47.67 |
| DFA | MobileNet V2 | 41.86 | 35.21 | 29.36 | 42.36 | 43.66 | 38.49 |
| LPL | MobileNet V2 | 59.69 | 40.38 | 40.14 | 50.13 | 62.26 | 50.52 |
| DETN | MobileNet V2 | 53.49 | 40.38 | 35.09 | 45.88 | 45.26 | 44.02 |
| FTDNN | MobileNet V2 | 71.32 | 46.01 | 45.41 | 49.96 | 62.87 | 55.11 |
| ECAN | MobileNet V2 | 53.49 | 43.08 | 35.09 | 45.77 | 45.09 | 44.50 |
| CADA | MobileNet V2 | 62.79 | 53.05 | 43.12 | 49.34 | 59.40 | 53.54 |
| SAFN | MobileNet V2 | 66.67 | 45.07 | 40.14 | 49.90 | 61.40 | 52.64 |
| SWD | MobileNet V2 | 68.22 | 55.40 | 43.58 | 50.30 | 60.04 | 55.51 |
| AGRA (Ours) | MobileNet V2 | 72.87 | 55.40 | 45.64 | 51.05 | 63.94 | 57.78 |

Source Domain: AFE

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | ResNet-50 | 56.59 | 57.28 | 44.27 | 46.92 | 52.91 | 51.59 |
| DFA | ResNet-50 | 51.86 | 52.70 | 38.03 | 41.93 | 60.12 | 48.93 |
| LPL | ResNet-50 | 73.64 | 61.03 | 49.77 | 49.54 | 55.26 | 57.85 |
| DETN | ResNet-50 | 56.27 | 52.11 | 44.72 | 42.17 | 59.80 | 51.01 |
| FTDNN | ResNet-50 | 61.24 | 57.75 | 47.25 | 46.36 | 52.89 | 53.10 |
| ECAN | ResNet-50 | 58.14 | 56.91 | 46.33 | 46.30 | 61.44 | 53.82 |
| CADA | ResNet-50 | 72.09 | 49.77 | 50.92 | 50.32 | 61.70 | 56.96 |
| SAFN | ResNet-50 | 73.64 | 64.79 | 49.08 | 48.89 | 55.69 | 58.42 |
| SWD | ResNet-50 | 72.09 | 61.50 | 48.85 | 48.83 | 56.22 | 57.50 |
| AGRA (Ours) | ResNet-50 | 78.57 | 65.43 | 51.18 | 51.31 | 62.71 | 61.84 |

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | ResNet-18 | 54.26 | 51.17 | 47.48 | 46.44 | 54.85 | 50.84 |
| DFA | ResNet-18 | 35.66 | 45.82 | 34.63 | 36.88 | 62.53 | 43.10 |
| LPL | ResNet-18 | 67.44 | 62.91 | 48.39 | 49.82 | 54.51 | 56.61 |
| DETN | ResNet-18 | 44.19 | 47.23 | 45.46 | 45.39 | 58.41 | 48.14 |
| FTDNN | ResNet-18 | 58.91 | 59.15 | 47.02 | 48.58 | 55.29 | 53.79 |
| ECAN | ResNet-18 | 44.19 | 60.56 | 43.26 | 46.15 | 62.52 | 51.34 |
| CADA | ResNet-18 | 72.09 | 53.99 | 48.39 | 48.61 | 58.50 | 56.32 |
| SAFN | ResNet-18 | 68.22 | 61.50 | 50.46 | 50.07 | 55.17 | 57.08 |
| SWD | ResNet-18 | 77.52 | 59.15 | 50.69 | 51.84 | 56.56 | 59.15 |
| AGRA (Ours) | ResNet-18 | 79.84 | 61.03 | 51.15 | 51.95 | 65.03 | 61.80 |

| Method | Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| ------ | -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ICID | MobileNet V2 | 55.04 | 42.72 | 34.86 | 39.94 | 44.34 | 43.38 |
| DFA | MobileNet V2 | 44.19 | 27.70 | 31.88 | 35.95 | 61.55 | 40.25 |
| LPL | MobileNet V2 | 69.77 | 50.23 | 43.35 | 45.57 | 51.63 | 52.11 |
| DETN | MobileNet V2 | 57.36 | 54.46 | 32.80 | 44.11 | 64.36 | 50.62 |
| FTDNN | MobileNet V2 | 65.12 | 46.01 | 46.10 | 46.69 | 53.02 | 51.39 |
| ECAN | MobileNet V2 | 71.32 | 56.40 | 37.61 | 45.34 | 64.00 | 54.93 |
| CADA | MobileNet V2 | 70.54 | 45.07 | 40.14 | 46.72 | 54.93 | 51.48 |
| SAFN | MobileNet V2 | 62.79 | 53.99 | 42.66 | 46.61 | 52.65 | 51.74 |
| SWD | MobileNet V2 | 64.34 | 53.52 | 44.72 | 50.24 | 55.85 | 53.73 |
| AGRA (Ours) | MobileNet V2 | 75.19 | 54.46 | 47.25 | 47.88 | 61.10 | 57.18 |

Mean of All Methods

Each entry below is the unweighted average of the corresponding column over the ten methods above.

Source Domain: RAF

| Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ResNet-50 | 75.87 | 54.30 | 50.49 | 54.82 | 62.09 | 59.51 |
| ResNet-18 | 69.43 | 51.88 | 47.94 | 51.72 | 61.26 | 56.45 |
| MobileNet V2 | 60.78 | 45.15 | 39.59 | 47.92 | 56.46 | 49.98 |

Source Domain: AFE

| Backbone | CK+ | JAFFE | SFEW 2.0 | FER2013 | ExpW | Mean |
| -------- | ----- | ----- | -------- | ------- | ----- | ----- |
| ResNet-50 | 65.41 | 57.93 | 47.04 | 47.26 | 57.87 | 55.10 |
| ResNet-18 | 60.23 | 56.25 | 46.69 | 47.57 | 58.34 | 53.82 |
| MobileNet V2 | 63.57 | 48.46 | 40.14 | 44.91 | 56.34 | 50.68 |
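
As a quick consistency check (a standalone sketch, not code from this repository), both kinds of mean can be reproduced from the tables above, e.g. for the ResNet-50 results with RAF as the source domain:

# Per-target accuracy (%) of AGRA (Ours) with ResNet-50, source domain RAF.
agra_raf_r50 = {'CK+': 85.27, 'JAFFE': 61.50, 'SFEW 2.0': 56.43, 'FER2013': 58.95, 'ExpW': 68.50}
# The per-method Mean column is the unweighted average over the five target datasets.
print(round(sum(agra_raf_r50.values()) / len(agra_raf_r50), 2))  # 66.13

# CK+ column over all ten methods (ResNet-50, source RAF); its average gives the
# "Mean of All Methods" entry for that backbone and target dataset.
ck_raf_r50 = [74.42, 64.26, 74.42, 78.22, 79.07, 79.77, 72.09, 75.97, 75.19, 85.27]
print(round(sum(ck_raf_r50) / len(ck_raf_r50), 2))  # 75.87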

Citation

@article{chen2020cross,
  title={Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning},
  author={Chen, Tianshui and Pu, Tao and Wu, Hefeng and Xie, Yuan and Liu, Lingbo and Lin, Liang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3131222}
}

@inproceedings{xie2020adversarial,
  title={Adversarial Graph Representation Adaptation for Cross-Domain Facial Expression Recognition},
  author={Xie, Yuan and Chen, Tianshui and Pu, Tao and Wu, Hefeng and Lin, Liang},
  booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
  year={2020}
}

Contributors

For any questions, feel free to open an issue or contact us.
