EyeLipCropper

EyeLipCropper is a Python tool that crops eye and mouth ROIs from a given video. The pipeline consists of three parts: frame extraction, face alignment, and eye/mouth cropping. The size of the cropped eye/mouth images can be customized.
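
With the default test arguments, the whole pipeline is just the three scripts run in order; each step is detailed below:

>>> python frame_extract.py   # frames           -> ./test/images
>>> python face_align.py      # landmarks, boxes -> ./test/landmarks, ./test/boxes
>>> python eye_mouth_crop.py  # eye/mouth ROIs   -> ./test/left_eye, ./test/right_eye, ./test/mouth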

[vis: visualization of the cropped eye and mouth ROIs]

Usage

Prerequisites

>>> pip install -r requirements.txt

1. Extract frames of a given video

>>> python frame_extract.py -h
usage: frame_extract.py [-h] [--video-path VIDEO_PATH] [--images-path IMAGES_PATH]

extract frames with opencv

optional arguments:
  -h, --help            show this help message and exit
  --video-path VIDEO_PATH
                        the input video path
  --images-path IMAGES_PATH
                        the output frames path
 
# default for test: this will generate frames of the video in `./test/images`
>>> python frame_extract.py
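
According to its help text, frame_extract.py extracts frames with OpenCV. A minimal sketch of that idea (the input video path and frame naming below are illustrative, not necessarily the script's exact behaviour):

import os
import cv2  # opencv-python

def extract_frames(video_path, images_path):
    """Decode every frame of the video and save it as a numbered image."""
    os.makedirs(images_path, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the video (or a read error)
            break
        cv2.imwrite(os.path.join(images_path, f'{idx:06d}.png'), frame)
        idx += 1
    cap.release()

extract_frames('./test/video.mp4', './test/images')  # hypothetical input path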

2. Align faces in the frames with the face-alignment library

>>> python face_align.py -h
usage: face_align.py [-h] [--images-path IMAGES_PATH] [--landmarks-path LANDMARKS_PATH] [--boxes-path BOXES_PATH] [--device DEVICE] [--log-path LOG_PATH]

align faces with `https://github.com/1adrianb/face-alignment`

optional arguments:
  -h, --help            show this help message and exit
  --images-path IMAGES_PATH
                        the input frames path
  --landmarks-path LANDMARKS_PATH
                        the output 68 landmarks path
  --boxes-path BOXES_PATH
                        the output bounding boxes path
  --device DEVICE       cpu or gpu cuda device
  --log-path LOG_PATH   logging when there are no faces detected
  
# default for test: this will generate landmarks and bounding boxes in
# `./test/landmarks` and `./test/boxes`
>>> python face_align.py
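
This step relies on the face-alignment library's 68-point landmark predictor. A minimal sketch of how a single frame could be processed with that library (frame and output file names are illustrative, and the script's on-disk format for landmarks and boxes may differ):

import face_alignment  # https://github.com/1adrianb/face-alignment
import numpy as np

# LandmarksType._2D is the enum name in older releases; newer ones call it LandmarksType.TWO_D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cuda')

preds = fa.get_landmarks_from_image('./test/images/000001.png')
if preds is None:
    print('no face detected')            # face_align.py logs such frames to --log-path
else:
    landmarks = preds[0]                 # (68, 2) array for the first detected face
    np.save('./test/landmarks/000001.npy', landmarks)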

3. Crop the left eye, right eye, and mouth ROIs, with code adapted from the preprocessing tools of RT-GENE (eye) and LipForensics (mouth)

>>> python eye_mouth_crop.py -h
usage: eye_mouth_crop.py [-h] [--images-path IMAGES_PATH] [--landmarks-path LANDMARKS_PATH] [--boxes-path BOXES_PATH] [--eye-width EYE_WIDTH] [--eye-height EYE_HEIGHT]
                         [--face-roi-width FACE_ROI_WIDTH] [--face-roi-height FACE_ROI_HEIGHT] [--left-eye-path LEFT_EYE_PATH] [--right-eye-path RIGHT_EYE_PATH]
                         [--mean-face MEAN_FACE] [--mouth-width MOUTH_WIDTH] [--mouth-height MOUTH_HEIGHT] [--start-idx START_IDX] [--stop-idx STOP_IDX]
                         [--window-margin WINDOW_MARGIN] [--mouth-path MOUTH_PATH]

crop eye and mouth regions

optional arguments:
  -h, --help            show this help message and exit
  --images-path IMAGES_PATH
                        [COMMON] the input frames path
  --landmarks-path LANDMARKS_PATH
                        [COMMON] the input 68 landmarks path
  --boxes-path BOXES_PATH
                        [EYE] the input bounding boxes path
  --eye-width EYE_WIDTH
                        [EYE] width of cropped eye ROIs
  --eye-height EYE_HEIGHT
                        [EYE] height of cropped eye ROIs
  --face-roi-width FACE_ROI_WIDTH
                        [EYE] maximize this argument until there is a warning message
  --face-roi-height FACE_ROI_HEIGHT
                        [EYE] maximize this argument until there is a warning message
  --left-eye-path LEFT_EYE_PATH
                        [EYE] the output left eye images path
  --right-eye-path RIGHT_EYE_PATH
                        [EYE] the output right eye images path
  --mean-face MEAN_FACE
                        [MOUTH] mean face pathname
  --mouth-width MOUTH_WIDTH
                        [MOUTH] width of cropped mouth ROIs
  --mouth-height MOUTH_HEIGHT
                        [MOUTH] height of cropped mouth ROIs
  --start-idx START_IDX
                        [MOUTH] start of landmark index for mouth
  --stop-idx STOP_IDX   [MOUTH] end of landmark index for mouth
  --window-margin WINDOW_MARGIN
                        [MOUTH] window margin for smoothed_landmarks
  --mouth-path MOUTH_PATH
                        [MOUTH] the output mouth images path

# default for test: this will generate the final cropped left eye,
# right eye, and mouth images in `./test/left_eye`, `./test/right_eye`
# , and `./test/mouth`
>>> python eye_mouth_crop.py
  • Note that the arguments --face-roi-width and --face-roi-height should be made as large as possible; stop increasing them once a warning message is printed.
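
The actual script does more than a plain crop (the eye crops additionally use the detector bounding boxes and a face ROI, and the mouth crops use a mean face and smoothed landmarks over a window of frames), but the core idea is a fixed-size box around the relevant 68-point landmarks. A simplified sketch under those assumptions, with illustrative file names and crop sizes; whether points 36-41 are saved as the "left" or "right" eye follows the script's own convention:

import cv2
import numpy as np

def crop_roi(image, points, width, height):
    """Fixed-size box centred on the mean of the selected landmark points."""
    cx, cy = points.mean(axis=0).astype(int)   # landmarks are (x, y) pairs
    x0, y0 = max(cx - width // 2, 0), max(cy - height // 2, 0)
    return image[y0:y0 + height, x0:x0 + width]

img = cv2.imread('./test/images/000001.png')
lms = np.load('./test/landmarks/000001.npy')   # (68, 2) landmarks from step 2

eye_a = crop_roi(img, lms[36:42], 60, 36)      # iBUG 68-point indices 36-41
eye_b = crop_roi(img, lms[42:48], 60, 36)      # indices 42-47
mouth = crop_roi(img, lms[48:68], 96, 96)      # indices 48-67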

License

GPL-3.0 License

Reference

[1] Bulat, Adrian, and Georgios Tzimiropoulos. "How far are we from solving the 2D & 3D face alignment problem? (And a dataset of 230,000 3D facial landmarks)." Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2017. GitHub: https://github.com/1adrianb/face-alignment

[2] Fischer, Tobias, Hyung Jin Chang, and Yiannis Demiris. "RT-GENE: Real-time eye gaze estimation in natural environments." Proceedings of the European Conference on Computer Vision (ECCV). 2018. GitHub: https://github.com/Tobias-Fischer/rt_gene

[3] Haliassos, Alexandros, et al. "Lips Don't Lie: A Generalisable and Robust Approach To Face Forgery Detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. GitHub: https://github.com/ahaliassos/LipForensics/
