Human motion synthesis using Unity3D

Overview

This project maps human motion-capture data (amc/asf/bvh) onto humanoid character models in Unity and records the resulting animations as videos across different characters, camera angles, scenes, and lighting setups.

Prerequisites

Software: amc2bvh.exe, Unity 2017, Blender.
Unity assets: RockVR (Video Capture), scenes, character models.
Files:
Motion files: amc, asf, or bvh format.
Character models: fbx format.

Procedure

  1. If the motion files are in amc/asf format, run amc2bvh.exe to convert them to bvh.
  2. Place all bvh files into "Desktop/New folder/bvh" (or modify the path in the script).
  3. Open Blender and run the bvh2fbx.py script. It converts the motion files to fbx format, which Unity can process, and places them under the Unity "Resources/Input" folder[1].
  4. Find the imported motion files in Unity and change their Animation Type to Humanoid under Rig. Check that each model is mapped properly.
  5. Configure the variations to record (characters, camera angle, scene, lighting):
    1. For characters, add[2] or remove the desired ones under the "characters" GameObject in the Unity Editor. For each new character added to the scene, assign the "New Animator Controller"[3] from Assets to the character's Controller field in its Animator component.
    2. For the camera, move the DedicatedCapture GameObjects to the desired locations. Add additional DedicatedCapture GameObjects for more angles. See the RockVR Video Capture documentation for details.
    3. For scenes, check the desired scenes in the intro scene and run.
    4. For lighting, change the "lights" parameter in the Automation.cs script. Add more values to the array for more variations in lighting angles (a sketch follows this list).
  6. Open the "intro" scene and run it from the Unity Editor. Click the "Start" button to begin the process.
  7. Adjust the desired resolution and frame rate and click Start. For the initial run, leave all the counters at 0; for continuing runs, enter the counters where the previous run left off. The videos are recorded to "Documents/RockVR/Video"[4].
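Automation.cs itself is not reproduced here, so the following is only a minimal sketch of how the "lights" array of angles could drive the lighting variations, assuming each value is an X-axis rotation in degrees applied to the scene's "Directional light". The field type and the ApplyLighting method are assumptions for illustration; only the array name "lights" comes from this document.

    using UnityEngine;

    // Minimal sketch (not the project's actual Automation.cs): shows how a
    // "lights" array of angles could drive lighting variations.
    public class AutomationLightingSketch : MonoBehaviour
    {
        // Each value is one lighting variation: the X-axis angle of the
        // Directional light, in degrees. Add more values for more variations.
        public float[] lights = { 30f, 60f, 90f };

        // Applies the lighting variation selected by lightCounter.
        public void ApplyLighting(int lightCounter)
        {
            GameObject directionalLight = GameObject.Find("Directional light");
            if (directionalLight == null)
            {
                Debug.LogWarning("No 'Directional light' GameObject found in the scene.");
                return;
            }
            directionalLight.transform.rotation = Quaternion.Euler(lights[lightCounter], 0f, 0f);
        }
    }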

Note

  • [1] Converting too many bvh files at a time may cause Blender to crash. Convert them in smaller batches (roughly 50 files at a time).
  • [2] To add a GameObject to a scene in Unity, drag it from the Assets window into the Hierarchy window or into the scene itself. You can also create an empty GameObject via "GameObject->Create Empty".
  • [3] Depending on the frame rate of the motion files, you may need to adjust the speed of the animation. To do this, go to "Assets", find and open the "New Animator Controller", click "New State", and set its speed to framerate/24 (120 fps becomes 5, 60 fps becomes 2.5, and so on). Also find the line "timeLeft = ((AnimationClip)clips[clipCounter]).length;" in the SwitchAnimation function and divide it by the speed (a sketch follows these notes).
  • [4] Unity will most likely freeze or crash if left running for too long. Adjust the counters in the "intro" scene to resume progress.
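Note [3] quotes a single line from the SwitchAnimation function; the sketch below shows how that timing adjustment might look in context. Only the timeLeft assignment is taken from the project; the surrounding fields are assumptions based on the note's wording.

    using UnityEngine;

    // Sketch of the timing fix from note [3]; only the timeLeft assignment is
    // quoted from the project, the rest is assumed for illustration.
    public class SwitchAnimationSketch : MonoBehaviour
    {
        public Object[] clips;       // animation clips loaded from Resources/Input
        public int clipCounter = 0;  // index of the clip that is about to play
        public float speed = 5.0f;   // framerate / 24, e.g. 120 fps -> 5, 60 fps -> 2.5
        private float timeLeft;      // seconds left before switching to the next clip

        void SwitchAnimation()
        {
            // The clip length assumes 1x playback speed (line quoted from the project)...
            timeLeft = ((AnimationClip)clips[clipCounter]).length;
            // ...so divide by the Animator state's speed to get the real wall-clock duration.
            timeLeft /= speed;
            clipCounter++;
        }
    }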

Scene Creation procedure

  1. To get a scene, either download a pre-built one or build one yourself from 3D models added as GameObjects.
  2. Create an empty GameObject named "characters" and place it at a location best suited for recording. Add a character to it to check whether any repositioning or scaling is needed.
  3. Add DedicatedCapture GameObjects from the "RockVR/Video/Prefabs" folder to the scene in desired locations.
  4. Attach the AudioCapture script in "RockVR/Video/Scripts" folder to the main camera.
  5. Create an empty GameObject named "VideoCaptureCtrl" and attach the VideoCaptureCtrl script from "RockVR/Video/Scripts" to it. Also attach the Automation.cs script from "Scripts".
  6. Add the first DedicatedCapture GameObject as well as the AudioCapture to the VideoCaptureCtrl script.
  7. If there is no "Directional light" GameObject, create one.
  8. Add the created scene to build settings.
  9. Add a check box in the intro scene for the newly created scene and modify "ProcessParameter" accordingly (a hypothetical sketch of the check-box handling follows this list).
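The document does not show how the intro scene reads its check boxes, so the following is a hypothetical sketch of how scene toggles could be collected before recording, assuming UnityEngine.UI.Toggle check boxes. The class and member names are placeholders, not the project's actual "ProcessParameter" code.

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical sketch: collects the scenes ticked in the intro scene so the
    // automation knows which ones to record. All names are illustrative only.
    public class SceneSelectionSketch : MonoBehaviour
    {
        [System.Serializable]
        public class SceneToggle
        {
            public Toggle checkbox;   // check box shown in the intro scene
            public string sceneName;  // must match the scene name added to Build Settings
        }

        public List<SceneToggle> sceneToggles = new List<SceneToggle>();

        // Returns the names of the scenes whose check boxes are ticked.
        public List<string> GetSelectedScenes()
        {
            var selected = new List<string>();
            foreach (var entry in sceneToggles)
            {
                if (entry.checkbox != null && entry.checkbox.isOn)
                {
                    selected.Add(entry.sceneName);
                }
            }
            return selected;
        }
    }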

Additional characters

In the "characters" folder in Assets, there is a list of preprocessed characters I got from the Unity asset store for free.
To process new characters:

  1. Change its Animation Type to Humanoid under Rig.
  2. Fix any mapping problems in the character's bones.
  3. Remove the mapping on the bones of both hands. This can be done using the "New Human Template" in the Assets folder. (This avoids weird finger mapping from the animations; an optional Editor-script alternative is sketched after this list.)
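Steps 1-3 are normally done by hand in the model's Import Settings (with the "New Human Template" asset covering step 3). As an optional alternative, the hedged Editor-script sketch below shows how finger mappings could be stripped from a model's HumanDescription programmatically; the asset path and menu name are placeholders, and the script must live in an Editor folder.

    using System.Linq;
    using UnityEditor;
    using UnityEngine;

    // Hedged sketch (place in an Editor folder): strips finger bone mappings from a
    // humanoid model so the animations do not produce odd finger poses.
    public static class RemoveFingerMappingSketch
    {
        [MenuItem("Tools/Remove Finger Mapping (example)")]
        public static void Run()
        {
            string path = "Assets/characters/SomeCharacter.fbx"; // placeholder path
            var importer = AssetImporter.GetAtPath(path) as ModelImporter;
            if (importer == null)
            {
                Debug.LogError("No ModelImporter found at " + path);
                return;
            }

            HumanDescription description = importer.humanDescription;
            string[] fingerKeywords = { "Thumb", "Index", "Middle", "Ring", "Little" };

            // Keep only the human bone mappings that are not fingers.
            description.human = description.human
                .Where(b => !fingerKeywords.Any(k => b.humanName.Contains(k)))
                .ToArray();

            importer.humanDescription = description;
            importer.SaveAndReimport();
            Debug.Log("Removed finger mappings from " + path);
        }
    }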

Instructions on error handling

  • If you terminate the program inside the Unity Editor, ffmpeg.exe will keep running and leave unfinished video and audio files in the videos folder. To fix this, terminate ffmpeg.exe from Task Manager and delete the unfinished files.
  • Since the program freezes fairly often, a temporary save-state feature is implemented. Once Unity freezes, terminate it from Task Manager, look into the videos folder, and work out which combination the next video should be. Enter those values in the "intro" scene (the various counters) to pick up where the last run left off (see the sketch below).
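The counters entered in the "intro" scene act as this save state. Below is a minimal sketch of how such counters might map onto the recording loop, assuming nested loops over characters, lights, and clips; the field names and loop structure are assumptions, not the project's actual Automation.cs code.

    using UnityEngine;

    // Hedged sketch: the "intro" scene counters act as a save state by letting
    // the recording loops skip combinations that were already captured.
    public class ResumeCountersSketch : MonoBehaviour
    {
        // Values entered in the "intro" scene: 0 for a fresh run, or the indices
        // where the previous (crashed) run left off.
        public int characterCounter = 0;
        public int lightCounter = 0;
        public int clipCounter = 0;

        // Totals for each variation dimension (example values).
        public int characterCount = 3;
        public int lightCount = 3;
        public int clipCount = 50;

        void Start()
        {
            bool resuming = true;
            for (int ch = 0; ch < characterCount; ch++)
            {
                for (int li = 0; li < lightCount; li++)
                {
                    for (int cl = 0; cl < clipCount; cl++)
                    {
                        // Skip combinations recorded before the crash.
                        if (resuming &&
                            (ch < characterCounter ||
                             (ch == characterCounter && li < lightCounter) ||
                             (ch == characterCounter && li == lightCounter && cl < clipCounter)))
                        {
                            continue;
                        }
                        resuming = false;

                        Debug.LogFormat("Would record character {0}, light {1}, clip {2}", ch, li, cl);
                        // In the real project this is where the clip is played and
                        // RockVR capture is started for this combination.
                    }
                }
            }
        }
    }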

Local environment specs

  • OS: Microsoft Windows 10 Pro
  • Version: 10.0.16299 Build 16299
  • Processor: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, 2201 MHz, 10 Core(s), 20 Logical Processor(s)
  • Total Physical Memory: 63.9 GB
  • GPU: NVIDIA Quadro M5000
Owner
Hao Xu