CharacterGAN: Few-Shot Keypoint Character Animation and Reposing


Implementation of the paper "CharacterGAN: Few-Shot Keypoint Character Animation and Reposing" by Tobias Hinz, Matthew Fisher, Oliver Wang, Eli Shechtman, and Stefan Wermter (open the paper PDF with Adobe Acrobat or similar to see the animated visualizations).

Supplementary material can be found here.

Our model can be trained on only a few images (e.g. 10) of a given character labeled with user-chosen keypoints. The resulting model can be used to animate the character it was trained on by interpolating between poses specified by their keypoints. We can also repose characters by simply moving the keypoints to the desired positions. To train the model, all we need are a few images depicting the character in diverse poses from the same viewpoint, the keypoints for each image, a file that describes how the keypoints are connected (the character's skeleton), and a file that states which keypoints lie in the same layer.

Examples

Animation: For all examples the model was trained on 8-15 images (see first row) of the given character.

[Animated examples: dog (12 training images), Maddy (15), ostrich (9), man (12), robot (15), man (15), cow (8).]



Frame interpolation: Example of interpolations between two poses with the start and end keypoints highlighted.

[Interpolation sequences for the man and dog characters.]



Reposing: You can use our interactive GUI to easily repose a given character based on keypoints.

[Interactive GUI screenshots: dog, man, cow.]

Installation

  • python 3.8
  • pytorch 1.7.1
pip install -r requirements.txt

Training

Training Data

All training data for a given character should be in a single folder. We used this website to label our images but there are of course other possibilities.

The folder should contain:

  • all training images (all in the same resolution),
  • a file called keypoints.csv (containing the keypoints for each image),
  • a file called keypoints_skeleton.csv (containing skeleton information, i.e. how keypoints are connected with each other), and
  • a file called keypoints_layers.csv (containing the information about which layer each keypoint resides in).

The structure of the keypoints.csv file is (no header): keypoint_label,x_coord,y_coord,file_name. The first column describes the keypoint label (e.g. head), the next two columns give the location of the keypoint, and the final column states which training image this keypoint belongs to.

The structure of the keypoints_skeleton.csv file is (no header): keypoint,connected_keypoint,connected_keypoint,.... The first column describes which keypoint we are describing in this line, the following columns describe which keypoints are connected to that keypoint (e.g. elbow, shoulder, hand would state that the elbow keypoint should be connected to the shoulder keypoint and the hand keypoint).

The structure of the keypoints_layers.csv file is (no header): keypoint,layer. "Keypoint" is the keypoint label (same as used in the previous two files) and "layer" is an integer value describing which layer the keypoint resides in.
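
Putting the three formats together, a minimal sketch of how such files could be parsed (the file contents and keypoint names below are invented for illustration and are not from the repository's datasets):

```python
# Hypothetical example of the three CSV formats described above.
import csv
import io

keypoints_csv = """head,50,20,pose_01.png
hand,30,60,pose_01.png
head,70,25,pose_02.png
hand,55,40,pose_02.png"""

skeleton_csv = """head,shoulder
elbow,shoulder,hand"""

layers_csv = """head,1
hand,0"""

# keypoints.csv: keypoint_label,x_coord,y_coord,file_name
keypoints = {}
for label, x, y, fname in csv.reader(io.StringIO(keypoints_csv)):
    keypoints.setdefault(fname, {})[label] = (float(x), float(y))

# keypoints_skeleton.csv: keypoint,connected_keypoint,...
# (variable number of columns per row)
skeleton = {}
for row in csv.reader(io.StringIO(skeleton_csv)):
    skeleton[row[0]] = row[1:]

# keypoints_layers.csv: keypoint,layer (integer layer index)
layers = {}
for keypoint, layer in csv.reader(io.StringIO(layers_csv)):
    layers[keypoint] = int(layer)
```

Here `skeleton["elbow"]` would yield `["shoulder", "hand"]`, matching the elbow example above.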

See our example training data in datasets for examples of all three files.

We provide two examples (produced by Zuzana Studená) for training, located in datasets. Our other examples were trained on data from Adobe Stock or from Character Animator, and we currently do not have a license to distribute them. You can purchase the Stock data here:

  • Man: we used all images
  • Dog: we used all images
  • Ostrich: we used the first nine images
  • Cow: we used the first eight images

There are also several websites where you can download Sprite sheets for free.

Train a Model

To train a model with the default parameters from our paper run:

python train.py --gpu_ids 0 --num_keypoints 14 --dataroot datasets/Watercolor-Man --fp16 --name Watercolor-Man

Training one model should take about 60 (FP16) to 90 (FP32) minutes on an NVIDIA GeForce RTX 2080 Ti. You can usually use fewer iterations for training and still achieve good results (see next section).

Training Parameters

You can adjust several parameters at train time to possibly improve your results.

  • --name to change the name of the folder in which the results are stored (default is CharacterGAN-Timestamp)
  • --niter 4000 and --niter_decay 4000 to adjust the number of training steps (niter_decay is the number of training steps during which the learning rate is reduced linearly; the default is 8000 for both, but you can often get good results with fewer iterations)
  • --mask True --output_nc 4 to train with a mask
  • --skeleton False to train without skeleton information
  • --bkg_color 0 to set the background color of the training images to black (default is white, only important if you train with a mask)
  • --batch_size 10 to train with a different batch size (default is 5)
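
For example, the options above can be combined in a single run (the run name here is arbitrary):

```shell
python train.py --gpu_ids 0 --num_keypoints 14 --dataroot datasets/Watercolor-Man \
    --name Watercolor-Man-masked --niter 4000 --niter_decay 4000 \
    --mask True --output_nc 4 --batch_size 10
```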

The file options/keypoints.py lets you modify/add/remove keypoints for your characters.

Results

The output is saved to checkpoints/ and we log the training process with Tensorboard. To monitor the progress go to the respective folder and run

 tensorboard --logdir .

Testing

At test time you can either use the model to animate the character or use our interactive GUI to change the position of individual keypoints.

Animate Character

To animate a character (or create interpolations between two images):

python animate_example.py --gpu_ids 0 --model_path checkpoints/Watercolor-Man-.../ --img_animation_list datasets/Watercolor-Man/animation_list.txt --dataroot datasets/Watercolor-Man

--img_animation_list points to a file that lists the images that should be used for animation. The file should contain one file name per line pointing to an image in dataroot. The model then generates an animation by interpolating between the images in the given order. See datasets/Watercolor-Man/animation_list.txt for an example.
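
A hypothetical animation list (the file names below are placeholders; see the repository's example file for the real entries) would simply look like:

```
pose_01.png
pose_02.png
pose_01.png
```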

You can add --draw_kps to visualize the keypoints in the animation. You can specify the GIF parameters by setting --num_interpolations 10 and --fps 5. num_interpolations specifies how many images are generated between two real images (from img_animation_list), and fps determines the frames per second of the generated GIF.
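
Conceptually, the in-between poses come from linearly interpolating the keypoint coordinates of consecutive real images. A minimal sketch (not code from the repository; the keypoint labels are hypothetical):

```python
# Linear interpolation between two keypoint poses, illustrating what
# --num_interpolations in-between frames means. Poses are dicts mapping
# a keypoint label to an (x, y) coordinate.

def interpolate_keypoints(start, end, num_interpolations):
    """Return `num_interpolations` in-between poses (endpoints excluded)."""
    frames = []
    for i in range(1, num_interpolations + 1):
        t = i / (num_interpolations + 1)  # interpolation factor in (0, 1)
        frame = {
            label: (
                (1 - t) * start[label][0] + t * end[label][0],
                (1 - t) * start[label][1] + t * end[label][1],
            )
            for label in start
        }
        frames.append(frame)
    return frames

start_pose = {"head": (50.0, 20.0), "hand": (30.0, 60.0)}
end_pose = {"head": (70.0, 20.0), "hand": (50.0, 40.0)}

in_between = interpolate_keypoints(start_pose, end_pose, 3)
```

Each interpolated pose is then fed to the generator, which renders the corresponding frame of the GIF.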

Modify Individual Keypoints

To run the interactive GUI:

python visualizer.py --gpu_ids 0 --model_path checkpoints/Watercolor-Man-.../

Set --gpu_ids -1 to run the model on a CPU. You can also scale the images during visualization, e.g. use --scale 2.

Patch-based Refinement

We use this implementation to run the patch-based refinement step on our generated images. The easiest way to do this is to merge all your training images into a single large image file and use this image file as the style and source image.

Acknowledgements

Our implementation uses code from Pix2PixHD, the TPS augmentation from DeepSIM, and the patch-based refinement code from https://ebsynth.com/ (GitHub).

We would also like to thank Zuzana Studená who produced some of the artwork used in this work.

Citation

If you found this code useful, please consider citing:

@article{hinz2021character,
    author    = {Hinz, Tobias and Fisher, Matthew and Wang, Oliver and Shechtman, Eli and Wermter, Stefan},
    title     = {CharacterGAN: Few-Shot Keypoint Character Animation and Reposing},
    journal   = {arXiv preprint arXiv:2102.03141},
    year      = {2021}
}