Efficient face emotion recognition in photos and videos

Overview

This repository contains code for face emotion recognition developed within the RSF (Russian Science Foundation) project no. 20-71-10010 (Efficient audiovisual analysis of dynamical changes in emotional state based on information-theoretic approach).

Our approach is described in the arXiv paper presented at IEEE SISY 2021. An extended version of this paper is under consideration at an international journal.

All the models were pre-trained for the face identification task on the VGGFace2 dataset. To train the PyTorch models, we borrowed the SAM (Sharpness-Aware Minimization) code.

We provide several models that achieve state-of-the-art results on the AffectNet dataset. The facial features extracted by these models lead to state-of-the-art accuracy of face-only models on the video datasets from the EmotiW 2019 and 2020 challenges: AFEW (Acted Facial Expression In The Wild), VGAF (Video level Group AFfect) and EngageWild.

Here are the accuracies measured on the test sets of the above-mentioned datasets:

| Model | AffectNet (8 classes), original | AffectNet (8 classes), aligned | AffectNet (7 classes), original | AffectNet (7 classes), aligned | AFEW | VGAF |
|---|---|---|---|---|---|---|
| mobilenet_7.h5 | - | - | 64.71 | - | 55.35 | 68.92 |
| enet_b0_8_best_afew.pt | 60.95 | 60.18 | 64.63 | 64.54 | 59.89 | 66.80 |
| enet_b0_8_best_vgaf.pt | 61.32 | 61.03 | 64.57 | 64.89 | 55.14 | 68.29 |
| enet_b0_7.pt | - | - | 65.74 | 65.74 | 56.99 | 65.18 |
| enet_b2_8.pt | 63.025 | 62.40 | 66.29 | - | 57.78 | 70.23 |
| enet_b2_7.pt | - | - | 65.91 | 66.34 | 59.63 | 69.84 |

Please note that we report the accuracies for AFEW and VGAF only on the subsets in which MTCNN detects facial regions. The code also computes the overall accuracy on the complete test set, which is slightly lower due to the absence of faces or failed face detection.
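
For reference, facial regions can be detected with an off-the-shelf MTCNN implementation such as facenet-pytorch (used here purely for illustration; the repository ships its own detection code). A minimal sketch:

from facenet_pytorch import MTCNN
from PIL import Image

# detect all faces in a frame; detect() returns None when no face is
# found, which is the case excluded from the reported AFEW/VGAF accuracies
mtcnn = MTCNN(keep_all=True)
img = Image.open('frame.jpg')
boxes, probs = mtcnn.detect(img)
if boxes is not None:
    face = img.crop(tuple(boxes[0]))  # crop of the first detected face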

To run our code on these datasets, please first prepare them using our TensorFlow notebooks: train_emotions.ipynb, AFEW_train.ipynb and VGAF_train.ipynb.

If you want to run our mobile application, please run the following scripts inside the mobile_app folder:

python to_tflite.py
python to_pytorchlite.py
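
For reference, the Keras-to-TFLite step performed by to_tflite.py presumably follows the standard converter flow; a minimal sketch (the model filename is an assumption):

import tensorflow as tf

# load the Keras emotion model and convert it to a TFLite flatbuffer
model = tf.keras.models.load_model('mobilenet_7.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('mobilenet_7.tflite', 'wb') as f:
    f.write(tflite_model)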

Please note that the EfficientNet models for PyTorch are based on the old timm 0.4.5 package, so exactly this version should be installed with the following command:

pip install timm==0.4.5
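
With timm 0.4.5 installed, the PyTorch checkpoints can be used for inference roughly as follows. This is a minimal sketch, assuming the .pt files are fully serialized models that expect 224x224 RGB face crops normalized with ImageNet statistics; verify the class label order against the training notebooks:

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = torch.load('enet_b0_8_best_afew.pt', map_location='cpu')
model.eval()

face = Image.open('face_crop.jpg')  # an already-detected face crop
with torch.no_grad():
    scores = model(preprocess(face).unsqueeze(0))
print(scores.argmax(dim=1).item())  # index of the predicted emotion class
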
Comments
  • Can you share your Manually_Annotated_file csv files?

    I tested on the AffectNet validation data but got 0.5965 using enet_b2_8.pt. Can you share the Manually_Annotated_file validation.csv and training.csv with me for debugging?

    opened by Dian-Yi 10
  • affectnet march2021 version training script update

    As mentioned in #14, there are different versions of AffectNet. I updated the PyTorch training script for the AffectNet March 2021 release. Two notes:

    • I used horizontal flip for training augmentation,
    • and the emotion order in the logits differs.
    opened by sunggukcha 6
  • Confidence range for inference using python library

    Hi,

    First of all, thank you so much for such a convenient setup to use!

    I'm using the Python face emotion library in my code with model_name = 'enet_b0_8_best_afew'. I was wondering what the range of the confidence returned by the library, or by this model in particular, is. I wasn't able to figure that out.

    Thank you
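
    A note on ranges: if the returned scores are raw logits, they are unbounded; applying a softmax maps them to probabilities in [0, 1] that sum to one across the classes. Whether the library already applies a softmax would need checking against its source. A self-contained sketch:

    import torch
    import torch.nn.functional as F

    logits = torch.tensor([[2.0, -1.0, 0.5]])  # placeholder raw scores
    probs = F.softmax(logits, dim=1)           # each row now sums to 1
    confidence, label = probs.max(dim=1)       # confidence lies in [0, 1]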

    opened by varunsingh3000 4
  • Preprocessing of images to run inference

    Hello, thank you very much for your work.

    I am trying to preprocess a batch of images (I have my own dataset) the way you prepared your data. I'm following the notebook train_emotions.ipynb, as it is in TensorFlow and I'm using that framework.

    I have a question about the preprocessing steps, so I would like to ask whether these are correct. Here are the steps I'm following; let me know if I'm right or if something is missing:

    1. I already have my images with the faces detected and cropped, i.e., I have a dataset full of face crops.

    2. img = cv2.imread(img_path)

    3. img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    4. img = cv2.resize(img,(224,224))

    5. Then your notebook applies a normalization:

       def mobilenet_preprocess_input(x, **kwargs):
           x[..., 0] -= 103.939
           x[..., 1] -= 116.779
           x[..., 2] -= 123.68
           return x

       preprocessing_function = mobilenet_preprocess_input

    Here I am having an issue because the subtraction cannot be cast between an integer and a float, so I changed it to:

    def mobilenet_preprocess_input(x, **kwargs):
        x[..., 0] = x[..., 0] - 103.939
        x[..., 1] = x[..., 1] - 116.779
        x[..., 2] = x[..., 2] - 123.68
        return x

    preprocessing_function = mobilenet_preprocess_input

    So, let me know if the process I'm following is correct or if there's something missing.

    Thank you!
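
    A note on the casting issue above: cv2.imread returns a uint8 array, and NumPy refuses the in-place subtraction of a float from a uint8 array (the rewritten version assigns instead, which may silently truncate back to uint8). A minimal sketch that casts to float first:

    import numpy as np

    def mobilenet_preprocess_input(x, **kwargs):
        # cast to float before the in-place mean subtraction so no
        # integer casting error (or silent truncation) can occur
        x = x.astype(np.float32)
        x[..., 0] -= 103.939
        x[..., 1] -= 116.779
        x[..., 2] -= 123.68
        return x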

    opened by isa-tr 4
  • AttributeError: 'SqueezeExcite' object has no attribute 'gate'

    Excuse me, this problem occurs when using the ‘enet_b2_7.pt’ model for testing. I followed the steps you gave, but I really couldn't find the cause of this problem. Do you have any suggestions?

    opened by evercy 4
  • Age gender ethnicity model giving same output for different inputs

    import numpy as np
    import tensorflow as tf

    # the frozen graph uses TF1-style placeholders, so eager mode must be off
    tf.compat.v1.disable_eager_execution()

    class CNN(object):

        def __init__(self, model_filepath):
            self.model_filepath = model_filepath
            self.load_graph(model_filepath=self.model_filepath)

        def load_graph(self, model_filepath):
            print('Loading model...')
            self.graph = tf.Graph()
            self.sess = tf.compat.v1.InteractiveSession(graph=self.graph)

            with tf.compat.v1.gfile.GFile(model_filepath, 'rb') as f:
                graph_def = tf.compat.v1.GraphDef()
                graph_def.ParseFromString(f.read())

            print('Check out the input placeholders:')
            nodes = [n.name + ' => ' + n.op for n in graph_def.node if n.op == 'Placeholder']
            for node in nodes:
                print(node)

            # Define the input tensor
            self.input = tf.compat.v1.placeholder(np.float32, shape=[None, 224, 224, 3], name='input')
            # self.dropout_rate = tf.compat.v1.placeholder(tf.float32, shape=[], name='dropout_rate')

            tf.import_graph_def(graph_def, {'input_1': self.input})

            print('Model loading complete!')

            # Print the layer names of the imported graph
            layers = [op.name for op in self.graph.get_operations()]
            for layer in layers:
                print(layer)

        def test(self, data):
            # Fetch the three output heads by name
            age_t = self.graph.get_tensor_by_name('import/age_pred/Softmax:0')
            gender_t = self.graph.get_tensor_by_name('import/gender_pred/Sigmoid:0')
            ethnicity_t = self.graph.get_tensor_by_name('import/ethnicity_pred/Softmax:0')
            return self.sess.run([age_t, gender_t, ethnicity_t],
                                 feed_dict={self.input: data})
    

    I use this code to load "age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb" and predict with it, but when predicting on images I get the same output array every time:

    [array([[0.01319346, 0.00229602, 0.00176407, 0.00270929, 0.01408699, 0.00574261, 0.00756087, 0.01012164, 0.01221055, 0.01821703, 0.01120028, 0.00936489, 0.01003029, 0.00912451, 0.00813381, 0.00894791, 0.01277262, 0.01034999, 0.01053109, 0.0133063 , 0.01423471, 0.01610439, 0.01528896, 0.01825454, 0.01722076, 0.01933933, 0.01908059, 0.01899827, 0.01919533, 0.0278129 , 0.02204996, 0.02146631, 0.02125309, 0.02146868, 0.02230236, 0.02054285, 0.02096066, 0.01976574, 0.01990371, 0.02064857, 0.01843528, 0.01697922, 0.01610838, 0.01458549, 0.01581902, 0.01377539, 0.01298613, 0.01378927, 0.01191105, 0.01335083, 0.01154454, 0.01118198, 0.01019558, 0.01038121, 0.00920709, 0.00902615, 0.00936321, 0.00969135, 0.00867239, 0.00838663, 0.00797724, 0.00756043, 0.00890809, 0.00758041, 0.00743711, 0.00584346, 0.00555749, 0.00639214, 0.0061864 , 0.00784793, 0.00532241, 0.00567684, 0.00481544, 0.0052173 , 0.00513186, 0.00394571, 0.00415856, 0.00384584, 0.00452774, 0.0041736 , 0.00328163, 0.00327138, 0.00297012, 0.00369216, 0.00284221, 0.00255897, 0.00285459, 0.00232105, 0.00228869, 0.00218005, 0.0021927 , 0.00236659, 0.00233843, 0.00204793, 0.00209861, 0.00231407, 0.00145706, 0.00179674, 0.00186183, 0.00221309]], dtype=float32), array([[0.62949586]], dtype=float32), array([[0.21338916, 0.19771543, 0.19809113, 0.19525865, 0.19554558]], dtype=float32)]

    Is there something I am missing, or is this .pb file not meant for prediction?
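
    For reference, a usage sketch for the CNN class above. The preprocessing is a hypothesis (it reuses the per-channel mean subtraction from the repository's TensorFlow notebooks); feeding raw, un-normalized pixels is one common reason for near-constant outputs:

    import cv2
    import numpy as np

    cnn = CNN('age_gender_ethnicity_224_deep-03-0.13-0.97-0.88.pb')

    img = cv2.imread('face.jpg')  # hypothetical input image
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224)).astype(np.float32)
    # hypothetical: same mean subtraction as in the repository's notebooks
    img[..., 0] -= 103.939
    img[..., 1] -= 116.779
    img[..., 2] -= 123.68
    age_probs, gender_prob, ethnicity_probs = cnn.test(img[np.newaxis])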

    opened by sneakatyou 4
  • Provide the validation script/notebook.

    Hi,

    I am fond of your work and paper, but I cannot find any validation script to reproduce your results, especially the highest result with EfficientNet-B2 (8 classes) on AffectNet.

    Alternatively, could you please provide a separate script to pre-process the input images, so that we can validate the weights provided in your GitHub repository?

    Thank you,

    opened by ltkhang 4
  • A few suggestions.

    Hello!

    I have a couple of ideas:

    1. Could you please add a text description of the differences between the models, especially between the b0 and b2 variants?
    2. Please consider adding the hsemotion-onnx package to PyPI.
    opened by ioctl-user 3
  • Cannot load pretrained models

     File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/timm/models/efficientnet_blocks.py", line 47, in forward
        return x * self.gate(x_se)
      File "/Users/xxx/Library/Python/3.8/lib/python/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'SqueezeExcite' object has no attribute 'gate'
    
    opened by DefTruth 3
  • An error when running the code

    When running AFEW_train.ipynb, an error occurred:

    could not broadcast input array from shape (0,112,3) into shape (60,112,3)

    at facial_anylysis.py, line 274:

    tmp[dy[k]-1:edy[k],dx[k]-1:edx[k],:] = img[y[k]-1:ey[k],x[k]-1:ex[k],:]

    Why does this occur? Could you please fix it?
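
    A hedged workaround, sketched with the variables from the quoted line (hypothetical; the proper fix belongs in the detector's box-clipping logic), is to skip boxes whose clipped source region is empty:

    # skip boxes lying fully outside the frame, which is what produces
    # the zero-sized source slice reported in the error
    if ey[k] >= y[k] and ex[k] >= x[k]:
        tmp[dy[k]-1:edy[k], dx[k]-1:edx[k], :] = img[y[k]-1:ey[k], x[k]-1:ex[k], :]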

    opened by kiva12138 3
  • Valence and arousal

    Hello again! I've read your paper and I've seen that you use the circumplex model's variables, arousal and valence. How do those variables appear in the code? I can't find them :( Thank you, Amaia

    opened by AmaiaBiomedicalEngineer 2
  • Question about this work.

    Dear Andrey Savchenko,

    I'm a student and am going to build a small system to detect students' emotions for my thesis. While searching for a solution, I found your work. But I can't run https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_emotions.ipynb with the current version of the AffectNet dataset. Please correct me if I'm wrong. My question is: can I run this notebook https://github.com/HSE-asavchenko/face-emotion-recognition/blob/main/src/affectnet/train_affectnet_march2021_pytorch.ipynb with MobileNet? I tend to build small applications that detect emotions on the client side and then send the results to a server.

    Many thanks,

    Son Nguyen.

    opened by sonnguyen1996 2
Releases (v0.2.1)
Owner
Andrey Savchenko