
MMGEN-FaceStylor

English | 简体中文

Introduction

This repo is an efficient toolkit for face stylization based on the paper "AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning". Note that since the training code of AgileGAN has not been released yet, this repo merely adopts the pipeline from AgileGAN and combines it with other helpful practices from the literature.

This project is based on MMCV and MMGEN. Stars and forks are welcome 🤗!

Results from FaceStylor trained by MMGEN

Requirements

  • CUDA 10.0 / CUDA 10.1
  • Python 3
  • PyTorch >= 1.6.0
  • MMCV-Full >= 1.3.15
  • MMGeneration >= 0.3.0

Setup

Step-1: Create an Environment

First, we should build a conda virtual environment and activate it.

conda create -n facestylor python=3.7 -y
conda activate facestylor

Supposing you have CUDA 10.1 installed, you need to install the prebuilt PyTorch with CUDA 10.1:

conda install pytorch=1.6.0 cudatoolkit=10.1 torchvision -c pytorch
pip install -r requirements.txt
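
As a quick, optional sanity check that the installed versions match the requirements above, you can run the following snippet:

# Optional sanity check for the environment.
import torch

print('PyTorch:', torch.__version__)         # expect >= 1.6.0
print('CUDA available:', torch.cuda.is_available())
print('CUDA version:', torch.version.cuda)   # expect 10.1 for the cu101 wheels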

Step-2: Install MMCV and MMGEN

We can run the following command to install MMCV.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html

Of course, you can also refer to the MMCV Docs to install it.

Next, we should install MMGEN, which contains the basic generative models used in this project.

# Clone the MMGeneration repository.
git clone https://github.com/open-mmlab/mmgeneration.git
cd mmgeneration
# Install build requirements and then install MMGeneration.
pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"
cd ..

Step-3: Clone repo and prepare the data and weights

Now, we need to clone this repo first.

git clone https://github.com/open-mmlab/MMGEN-FaceStylor.git

For convenience, we suggest that you make these folders under MMGEN-FaceStylor.

cd MMGEN-FaceStylor
mkdir data
mkdir -p work_dirs/experiments
mkdir -p work_dirs/pre-trained

Then, you can put your data, or soft links to it, under the data folder, and store your experiments under work_dirs/experiments.

For testing and training, you need to download some necessary data provided by AgileGAN and put it under the data folder, or just run:

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1AavRxpZJYeCrAOghgtthYqVB06y9QJd3' -O data/shape_predictor_68_face_landmarks.dat
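
The file above is dlib's 68-point facial landmark model, which is typically used to detect and align faces before inversion. As a rough illustration of the file's role (a sketch using the dlib API, not this repo's exact preprocessing code):

# Sketch of loading the landmark model with dlib; an illustration of the
# file's role, not this repo's exact preprocessing code.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('data/shape_predictor_68_face_landmarks.dat')

img = dlib.load_rgb_image('demo/src.png')
for face in detector(img, 1):            # detect faces (1 upsampling pass)
    landmarks = predictor(img, face)     # 68 (x, y) facial keypoints
    print([(p.x, p.y) for p in landmarks.parts()][:5])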

We also provide some pre-trained weights.

Pre-trained Weights
FFHQ-1024 StyleGAN2
FFHQ-256 StyleGAN2
IR-SE50 Model
Encoder for FFHQ-1024 StyleGAN2
Encoder for FFHQ-256 StyleGAN2
MetFace-Oil 1024 StyleGAN2
MetFace-Sketch 1024 StyleGAN2
Toonify 1024 StyleGAN2
Cartoon 256
Bitmoji 256
Comic 256
More Styles on the Way!

Play with MMGEN-FaceStylor

If you have followed the aforementioned steps, we can start to investigate FaceStylor!

Quick Try

To quickly try our project, please run the command below

python demo/quick_try.py demo/src.png --style toonify

Then, you can check the result in work_dirs/demos/agile_result.png.

  • If you want to play with your own photos, you can replace demo/src.png with the path to your photo.
  • If you want to switch to another style, replace toonify with another style name. Currently, the supported styles include toonify, oil, sketch, bitmoji, cartoon, and comic; see the sketch below for trying all of them in one go.
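
For example, the following small helper (a sketch; the renamed output paths are our own convention, not produced by the demo itself) runs the demo once per supported style:

# Hypothetical helper: run the quick-try demo once for every supported style.
import subprocess
from pathlib import Path

styles = ['toonify', 'oil', 'sketch', 'bitmoji', 'cartoon', 'comic']
result = Path('work_dirs/demos/agile_result.png')
for style in styles:
    subprocess.run(['python', 'demo/quick_try.py', 'demo/src.png',
                    '--style', style], check=True)
    # The demo writes to a fixed path, so rename each result before the
    # next style overwrites it.
    result.rename(result.with_name(f'agile_result_{style}.png'))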

Inversion

The inversion task takes a source image as input and returns the most similar image that the generator can produce.
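
For intuition, optimization-based inversion searches latent space for a code whose generated image best reconstructs the source. The sketch below shows this generic idea with a plain pixel loss; it is not this repo's pipeline, which predicts the latent code with a trained encoder:

# Generic optimization-based GAN inversion, for intuition only; FaceStylor
# itself uses a trained encoder rather than this loop.
import torch

def invert(generator, src_img, steps=500, lr=0.01):
    """Optimize a latent z so that generator(z) reconstructs src_img."""
    z = torch.randn(1, 512, requires_grad=True)  # 512-d latent (assumed size)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        rec = generator(z)
        loss = torch.nn.functional.mse_loss(rec, src_img)  # pixel-level loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()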

For inversion, you can directly use agilegan_demo like this

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--ckpt CKPT] [--device DEVICE] [--save-path SAVE_PATH]

Here, you should set SOURCE_PATH to your image path, CONFIG to the config file path, and CKPT to the checkpoint path.

Take the CelebA-HQ encoder as an example: download the weights to work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth, put your test image under data, and run

python demo/agilegan_demo.py demo/src.png configs/agilegan/agile_encoder_celebahq1024x1024_lr_1e-4_150k.py --ckpt work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth

You will find the result at work_dirs/demos/agile_result.png.

Stylization

Since the encoder and decoder used for stylization can be trained from different configs, you are supposed to set their checkpoint paths in the config file. Take MetFace-Oil as an example: the first two lines of its config file read

encoder_ckpt_path = xxx
stylegan_weights = xxx

Keep your actual weight paths in line with your configs, then run the same command without specifying CKPT:

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--device DEVICE] [--save-path SAVE_PATH]

Train

Here I will explain how to fine-tune with your own dataset. With only 100-200 images and less than one hour of training, you can train your own StyleGAN2. All you need to do is copy an agile_transfer config, like this one, modify imgs_root to point to your actual data root, and choose one of the two commands below to train your model; a minimal config sketch follows the commands.

# For distributed training
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS_NUMBER} \
    --work-dir ./work_dirs/experiments/experiments_name \
    [optional arguments]
# For slurm training
bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${WORK_DIR} \
    [optional arguments]
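
For reference, such a copied config usually needs only the data root changed. A minimal sketch of an override (the _base_ file name below is a placeholder, not an actual config path):

# Hypothetical minimal transfer config: inherit an existing agile_transfer
# config and point imgs_root at your own dataset. Paths are placeholders.
_base_ = ['./agile_transfer_placeholder.py']

imgs_root = 'data/my_style_dataset'  # folder holding your 100-200 images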

Training Details

In this part, I will explain some training details, including the ADA setting, layer freezing, and losses.

ADA Setting

To use ADA in your discriminator, set ADAStyleGAN2Discriminator as your discriminator type and adjust the ADAAug settings as follows:

model = dict(
    discriminator=dict(
        type='ADAStyleGAN2Discriminator',
        data_aug=dict(
            type='ADAAug',
            aug_pipeline=aug_kwargs,  # This and the arguments below can be set by yourself.
            update_interval=4,
            augment_initial_p=0.,
            ada_target=0.6,
            ada_kimg=500,
            use_slow_aug=False)))
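
For intuition, ADA adjusts the augmentation probability every update_interval steps so that an overfitting heuristic r_t (the mean sign of the discriminator's logits on real images) tracks ada_target. A rough sketch of that update rule, following the StyleGAN2-ADA heuristic rather than this repo's exact code:

# Sketch of the ADA probability update (StyleGAN2-ADA heuristic); names
# mirror the config above, but this is not the repo's exact implementation.
def update_ada_p(p, real_logits, ada_target=0.6, ada_kimg=500,
                 update_interval=4, batch_size=16):
    r_t = real_logits.sign().mean().item()    # overfitting heuristic in [-1, 1]
    # Step size lets p traverse [0, 1] over ada_kimg thousand real images.
    step = update_interval * batch_size / (ada_kimg * 1000)
    p += step if r_t > ada_target else -step  # push r_t toward ada_target
    return min(max(p, 0.0), 1.0)              # keep p a valid probability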

Layer Freeze Setting

FreezeD can be used for small data fine-tuning.

FreezeG can be used for pseudo translation.

model = dict(
    freezeD=5,  # set to -1 if not needed
    freezeG=4)  # set to -1 if not needed

Losses Setting

In AgileGAN, to preserve the recognizable identity of the generated image, the authors introduce a similarity loss at the perceptual level. You can adjust lpips_lambda as follows:

model = dict(lpips_lambda=0.8)

Generally speaking, the larger lpips_lambda is, the better the recognizable identity is preserved.
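
With the lpips package, such a perceptual term can be sketched as follows; this illustrates the weighting only and is not the repo's training loop:

# Illustrative LPIPS identity-preservation term; not the repo's exact code.
import lpips
import torch

lpips_lambda = 0.8
percep = lpips.LPIPS(net='vgg')  # perceptual distance network

def total_loss(generated, source, other_losses):
    # Images are expected in [-1, 1]; a larger lpips_lambda penalizes
    # perceptual drift from the source more strongly.
    return other_losses + lpips_lambda * percep(generated, source).mean()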

Datasets Link

To make it easier for you to train your own models, here are some links to publicly available datasets.

Dataset Links
MetFaces
AFHQ
Toonify
photo2cartoon
selfie2anime
face2comics v2
High-Resolution Anime Face
Bitmoji

Applications

We also provide LayerSwap and DNI apps for trading off between the structure of the original image and the degree of stylization. You can adjust a few parameters to get your desired result.

LayerSwap

When layer swapping is applied, the generated images preserve more similarity to the source image than AgileGAN's results do.

From left to right: input, Layer-Swap with L = 4, 3, 2, AgileGAN output

Run this command line to perform layer swapping:

python apps/layerSwap.py source_path modelA modelB \
      [--swap-layer SWAP_LAYER] [--device DEVICE] [--save-path SAVE_PATH]

Here, modelA is set to a PSPEncoderDecoder (config starting with agile_encoder) with an FFHQ StyleGAN2 as the decoder, and modelB is set to a PSPEncoderDecoder (config starting with agile_encoder) with the desired style generator as the decoder. Generally, the deeper you set swap-layer, the more of the original image's structure will be kept.

We also provide a blending script to create and save the mixed weights.

python modelA modelB [--swap-layer SWAP_LAYER] [--show-input SHOW_INPUT] [--device DEVICE] [--save-path SAVE_PATH]

Here, modelA is the base model, where only the deep layers of its decoder will be replaced with modelB's counterpart.
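
Under the hood, such blending amounts to copying state-dict entries: keep modelA's weights for the shallow layers and take the deeper decoder layers from modelB. A rough sketch of the idea (the key-naming convention below is a placeholder; actual StyleGAN2 state-dict keys depend on the implementation):

# Hypothetical weight-blending sketch: shallow decoder layers from modelA,
# deeper layers from modelB. Key names are placeholders, not real keys.
import torch

def blend_decoders(ckpt_a, ckpt_b, swap_layer=4, save_path='blended.pth'):
    state_a = torch.load(ckpt_a, map_location='cpu')
    state_b = torch.load(ckpt_b, map_location='cpu')
    for key, value in state_b.items():
        # Assumed convention: keys carry a layer index, e.g. 'convs.<i>.'
        if key.startswith('convs.') and int(key.split('.')[1]) >= swap_layer:
            state_a[key] = value          # take the deeper layer from modelB
    torch.save(state_a, save_path)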

DNI

Deep Network Interpolation between L4 and AgileGAN output

For more precise control over the stylization degree, you can try DNI with the following command:

python apps/dni.py source_path modelA modelB [--intervals INTERVALS] [--device DEVICE] [--save-folder SAVE_FOLDER]

Here, modelA and modelB are supposed to be PSPEncoderDecoders (configs starting with agile_encoder) with decoders of different stylization degrees. INTERVALS specifies the number of interpolation steps.
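
DNI itself is just linear interpolation of the two models' weights: for each coefficient alpha, W = (1 - alpha) * W_A + alpha * W_B. A minimal sketch:

# Minimal DNI sketch: linearly interpolate two state dicts at several alphas.
import torch

def dni(ckpt_a, ckpt_b, intervals=5):
    state_a = torch.load(ckpt_a, map_location='cpu')
    state_b = torch.load(ckpt_b, map_location='cpu')
    for i in range(intervals):
        alpha = i / max(intervals - 1, 1)   # evenly spaced in [0, 1]
        mixed = {k: (1 - alpha) * state_a[k] + alpha * state_b[k]
                 for k in state_a}
        yield alpha, mixed                  # one interpolated model per alpha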

You can also try other applications in MMGEN, like interpolation and SeFa.

Interpolation


We also provide an application script for this. You can use apps/interpolate_sample.py with the following command for interpolation with unconditional models:

python apps/interpolate_sample.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT} \
    [--show-mode ${SHOW_MODE}] \
    [--endpoint ${ENDPOINT}] \
    [--interval ${INTERVAL}] \
    [--space ${SPACE}] \
    [--samples-path ${SAMPLES_PATH}] \
    [--batch-size ${BATCH_SIZE}]

For more details, you can read related Docs.

Gallery

Toonify





Oil





Cartoon





Comic





Bitmoji





Notes and TODOs

  • For the encoder, I experimented with a VAE encoder but found no significant improvement for inversion, so I follow the "encoding into Z plus space" approach as the authors do. I will release the VAE encoder version later; only a vanilla encoder is offered this time.
  • For the generator, I have released the vanilla StyleGAN2 generator; an attribute-aware generator will be released in the next version.
  • For the training settings, the parameters differ slightly from those in the paper. I also tried ADA, FreezeD, and other methods not mentioned in the paper.
  • More styles will be available in the next version.
  • More applications will be available in the next version.
  • We are also considering a web-based application.
  • Further code cleanup.

Acknowledgments

Code references:

Display photos from: https://unsplash.com/t/people

Web demo powered by: https://gradio.app/

License

This project is released under the Apache 2.0 license. Some implementations in MMGEN-FaceStylor are under licenses other than Apache 2.0. Please refer to LICENSES.md and check carefully if you are using our code for commercial purposes.
