Overview

EasyMocap

EasyMocap is an open-source toolbox for markerless human motion capture from RGB videos.

In this project, we provide the basic code for fitting the SMPL[1]/SMPL+H[2]/SMPL-X[3] models to capture body+hand+face poses from multiple views.

(Demo: the input views, the reprojected skeleton, and the fitted SMPL mesh.)

We plan to integrate more interesting algorithms, so please stay tuned!

  1. [CVPR19] Multi-Person from Multiple Views
  2. [ECCV20] Mocap from Multiple Uncalibrated and Unsynchronized Videos
  3. Dense Reconstruction and View Synthesis from Sparse Views

Installation

1. Download SMPL models

This step is the same as for smplx.

To download the SMPL model, go to this project website (male and female models, version 1.0.0, 10 shape PCs) and this one (gender-neutral model) and register to get access to the downloads section.

To download the SMPL+H model go to this project website and register to get access to the downloads section.

To download the SMPL-X model go to this project website and register to get access to the downloads section.

Place them as follows:

data
└── smplx
    ├── J_regressor_body25.npy
    ├── J_regressor_body25_smplh.txt
    ├── J_regressor_body25_smplx.txt
    ├── smpl
    │   ├── SMPL_FEMALE.pkl
    │   ├── SMPL_MALE.pkl
    │   └── SMPL_NEUTRAL.pkl
    ├── smplh
    │   ├── MANO_LEFT.pkl
    │   ├── MANO_RIGHT.pkl
    │   ├── SMPLH_FEMALE.pkl
    │   └── SMPLH_MALE.pkl
    └── smplx
        ├── SMPLX_FEMALE.pkl
        ├── SMPLX_MALE.pkl
        └── SMPLX_NEUTRAL.pkl
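
If you want to sanity-check this layout before running the demos, a small script along these lines can help (our minimal sketch, not part of the EasyMocap codebase; adjust root if your data directory lives elsewhere):

import os

root = 'data/smplx'
expected = [
    'J_regressor_body25.npy',
    'smpl/SMPL_FEMALE.pkl', 'smpl/SMPL_MALE.pkl', 'smpl/SMPL_NEUTRAL.pkl',
    'smplh/MANO_LEFT.pkl', 'smplh/MANO_RIGHT.pkl',
    'smplh/SMPLH_FEMALE.pkl', 'smplh/SMPLH_MALE.pkl',
    'smplx/SMPLX_FEMALE.pkl', 'smplx/SMPLX_MALE.pkl', 'smplx/SMPLX_NEUTRAL.pkl',
]
# Report any model file that is missing from the expected layout.
missing = [f for f in expected if not os.path.exists(os.path.join(root, f))]
print('All model files found.' if not missing else 'Missing: {}'.format(missing))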

2. Requirements

  • torch==1.4.0
  • torchvision==0.5.0
  • opencv-python
  • pyrender: for visualization
  • chumpy: for loading SMPL model
  • OpenPose[4]: for 2D pose

Some of the Python libraries are listed in requirements.txt. You can also try other versions of PyTorch.

Quick Start

We provide an example multiview dataset [Dropbox][BaiduDisk (code: vg1z)], which contains 800 frames from 23 synchronized and calibrated cameras. After downloading the dataset, you can run the following example scripts.

data=path/to/data
out=path/to/output
# 0. extract the video to images
python3 scripts/preprocess/extract_video.py ${data}
# 1. example for skeleton reconstruction
python3 code/demo_mv1pmf_skel.py ${data} --out ${out} --vis_det --vis_repro --undis --sub_vis 1 7 13 19
# 2.1 example for SMPL reconstruction
python3 code/demo_mv1pmf_smpl.py ${data} --out ${out} --end 300 --vis_smpl --undis --sub_vis 1 7 13 19 --gender male
# 2.2 example for SMPL-X reconstruction
python3 code/demo_mv1pmf_smpl.py ${data} --out ${out} --undis --body bodyhandface --sub_vis 1 7 13 19 --start 400 --model smplx --vis_smpl --gender male
# 3.1 example for rendering SMPLX to ${out}/smpl
python3 code/vis_render.py ${data} --out ${out} --skel ${out}/smpl --model smplx --gender male --undis --start 400 --sub_vis 1
# 3.2 example for rendering skeleton of SMPL to ${out}/smplskel
python3 code/vis_render.py ${data} --out ${out} --skel ${out}/smpl --model smplx --gender male --undis --start 400 --sub_vis 1 --type smplskel --body bodyhandface

Not Quick Start

0. Prepare Your Own Dataset

zju-ls-feng
├── intri.yml
├── extri.yml
└── videos
    ├── 1.mp4
    ├── 2.mp4
    ├── ...
    ├── 8.mp4
    └── 9.mp4

The input videos are placed in videos/.

Here intri.yml and extri.yml store the camera intrinsic and extrinsic parameters. For example, if the name of a video is 1.mp4, then intri.yml must contain K_1 and dist_1, and extri.yml must contain R_1 (the (3, 1) rotation vector of the camera) and T_1 (the (3, 1) translation vector). The files follow the OpenCV YAML storage format.
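
As an illustration, these files can be read with OpenCV's FileStorage API; the sketch below assumes the key names described above and is not required for running EasyMocap:

import cv2
import numpy as np

intri = cv2.FileStorage('intri.yml', cv2.FILE_STORAGE_READ)
extri = cv2.FileStorage('extri.yml', cv2.FILE_STORAGE_READ)
K = intri.getNode('K_1').mat()        # (3, 3) intrinsic matrix of camera 1
dist = intri.getNode('dist_1').mat()  # distortion coefficients of camera 1
rvec = extri.getNode('R_1').mat()     # (3, 1) rotation vector
T = extri.getNode('T_1').mat()        # (3, 1) translation vector
R, _ = cv2.Rodrigues(rvec)            # rotation vector -> (3, 3) rotation matrix
P = K @ np.hstack([R, T])             # (3, 4) projection matrix of camera 1

The resulting projection matrix P is what triangulation and reprojection ultimately rely on.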

1. Run OpenPose

data=path/to/data
out=path/to/output
python3 scripts/preprocess/extract_video.py ${data} --openpose <openpose_path> --handface
  • --openpose: specify the openpose path
  • --handface: detect hands and face keypoints

2. Run the code

# 1. example for skeleton reconstruction
python3 code/demo_mv1pmf_skel.py ${data} --out ${out} --vis_det --vis_repro --undis --sub_vis 1 7 13 19
# 2. example for SMPL reconstruction
python3 code/demo_mv1pmf_smpl.py ${data} --out ${out} --end 300 --vis_smpl --undis --sub_vis 1 7 13 19

The input flags:

  • --undis: undistort the images before processing
  • --start, --end: set the first and last frame indices to process.

The output flags:

  • --vis_det: visualize the 2D detections
  • --vis_repro: visualize the reprojection of the 3D keypoints
  • --sub_vis: specify the views to visualize; if not set, all views are used
  • --vis_smpl: render the SMPL mesh onto the images.

3. Output

The results are saved in JSON format.

<output_root>
├── keypoints3d
│   ├── 000000.json
│   └── xxxxxx.json
└── smpl
    ├── 000000.jpg
    ├── 000000.json
    └── 000004.json

The data in keypoints3d/000000.json is a list; each element represents one human body.

{
    "id": <id>,
    "keypoints3d": [[x0, y0, z0, c0], [x1, y1, z1, c1], ..., [xn, yn, zn, cn]]
}
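
For example, one frame of triangulated keypoints can be loaded like this (a minimal sketch; the number of joints depends on the chosen body model):

import json
import numpy as np

with open('keypoints3d/000000.json') as f:
    people = json.load(f)
for person in people:
    # Each element holds one person's id and an (nJoints, 4) array of x, y, z, confidence.
    keypoints3d = np.array(person['keypoints3d'])
    print(person['id'], keypoints3d.shape)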

The data in smpl/000000.json is also a list; each element stores the SMPL parameters, which are slightly different from the official model's.

{
    "id": <id>,
    "Rh": <(1, 3)>,
    "Th": <(1, 3)>,
    "poses": <(1, 72/78/87)>,
    "expression": <(1, 10)>,
    "shapes": <(1, 10)>
}

We set the first 3 dimensions of poses to zero and add a new parameter Rh to represent the global orientation, so the vertices of the SMPL model are given by V = R X(theta, beta) + T.

If you use the SMPL+H model, poses contains 22x3 body dimensions plus 6 PCA coefficients for each hand (78 in total). For the SMPL-X model, 3 additional head poses (jaw, left eye, right eye) of 3 dimensions each are added (87 in total).
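
As a minimal sketch of this convention (all values below are placeholders; in practice Rh and Th come from smpl/xxxxxx.json, and X is the output of the body model):

import cv2
import numpy as np

Rh = np.array([[0.1, 0.2, 0.3]])  # (1, 3) axis-angle global orientation
Th = np.array([[0.0, 0.0, 1.0]])  # (1, 3) global translation
R, _ = cv2.Rodrigues(Rh)          # (3, 3) global rotation matrix

# X stands for the vertices returned by the body model with zero global rotation.
X = np.zeros((6890, 3))           # dummy stand-in for X(theta, beta); SMPL has 6890 vertices
V = X @ R.T + Th                  # world-space vertices: V = R X + T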

Evaluation

Our code does not ship with the best weight parameters; you can adjust them according to your data. If you find a set of weights that works well, feel free to tell us.

We will add more quantitative reports in doc/evaluation.md.

Acknowledgements

Here are the great works this project is built upon:

  • SMPL models and layers are from the MPII SMPL-X repository.
  • Some functions are borrowed from SPIN, VIBE, and SMPLify-X.
  • The method for fitting the 3D skeleton and the SMPL model is similar to TotalCapture, but without using point clouds.

We also would like to thank Wenduo Feng who is the performer in the sample data.

Contact

Please open an issue if you have any questions.

Citation

This project is part of our works iMocap and Neural Body.

Please consider citing these works if you find this repo useful for your projects.

@inproceedings{dong2020motion,
  title={Motion capture from internet videos},
  author={Dong, Junting and Shuai, Qing and Zhang, Yuanqing and Liu, Xian and Zhou, Xiaowei and Bao, Hujun},
  booktitle={European Conference on Computer Vision},
  pages={210--227},
  year={2020},
  organization={Springer}
}

@article{peng2020neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  journal={arXiv preprint arXiv:2012.15838},
  year={2020}
}

Reference

[1] Loper, Matthew, et al. "SMPL: A skinned multi-person linear model." ACM transactions on graphics (TOG) 34.6 (2015): 1-16.
[2] Romero, Javier, Dimitrios Tzionas, and Michael J. Black. "Embodied hands: Modeling and capturing hands and bodies together." ACM Transactions on Graphics (ToG) 36.6 (2017): 1-17.
[3] Pavlakos, Georgios, et al. "Expressive body capture: 3d hands, face, and body from a single image." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
Bogo, Federica, et al. "Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image." European Conference on Computer Vision. Springer, Cham, 2016.
[4] Cao, Zhe, et al. "OpenPose: Realtime multi-person 2D pose estimation using part affinity fields." arXiv preprint arXiv:1812.08008 (2018).
Comments
  • Conversion to FBX

    I saw the great work you all did to get the BVH conversion working.

    Here is my approach to convert to FBX, adapting the code from VIBE:

    1. Install pandas: pip install pandas
    2. Install bpy: pip install bpy. If pip install bpy doesn't work, install it using conda (if you have conda installed): conda install -c kitsune.one python-blender
    3. Copy the Blender 2.82 files to the Python executable's directory. If you use conda (base or virtual environment), you can find the directory using conda env list. Copy the 2.82 folder from the Blender download and paste it into the environment you are using.
    4. Save "fbx_output_EasyMocap" in the folder scripts/postprocess.
    5. Finish by following these instructions for downloading SMPL for Unity: https://github.com/mkocabas/VIBE#fbx-and-gltf-output-new-feature

    Then you can test using the demo video file:

    python code/demo_mv1pmf_smpl.py demo_test --out out_demo_test --end 300 --vis_smpl --undis --sub_vis 1 7 13 19 --gender male

    Then test the FBX conversion with:

    python scripts/postprocess/fbx_output_EasyMocap.py --input out_demo_test/smpl/ --output out_demo_test/easymocap.fbx --fps_source 30 --fps_target 30 --gender male

    The fbx conversion script: fbx_output_EasyMocap.zip

    opened by carlosedubarreto 26
  • mode openpose change yolo-hrnet error

    When I ran the script extract_video.py, I changed the default mode from openpose to yolo-hrnet and this error occurred. I'm not sure if this is caused by missing files.

    opened by zhanghongyong123456 21
  • Converting Output SMPL to the standard SMPL parameters

    Hi, as mentioned in the documentation, the output representation of SMPL is slightly different from the original SMPL parameters. Is there any way to convert the output SMPL with Rh and Th to the standard SMPL model parameters? Thanks very much.

    opened by leilaUEA 20
  • Twisting occurring

    Hi there,

    This model works well, but I have found that some 'twisting' happens when the mesh rotates 360 degrees. I suspect that more cameras would solve this (in the attached example I'm using 4 cameras).

    Are there any known solutions for this?

    Thanks

    https://user-images.githubusercontent.com/38970401/145704581-94de841f-4232-44c1-8b95-f3fe9d7351aa.mp4

    opened by jamalknight 12
  • json.decoder error (Mediapipe)

    Hello, thanks for this amazing development. I was trying to use MediaPipe for MV1P and I followed these steps:

    1. python apps/preprocess/extract_image.py 0_input\project
    2. python apps/preprocess/extract_keypoints.py 0_input\project --mode mp-pose
    3. python apps/demo/mv1p.py 0_input/project --out 1_output/project --vis_det --vis_repro --sub_vis 1 2 3 4 --vis_smpl

    But when I run the 3rd step (mv1p.py), I get this error:

    Traceback (most recent call last):
      File "apps/demo/mv1p.py", line 117, in <module>
        mv1pmf_skel(dataset, check_repro=True, args=args)
      File "apps/demo/mv1p.py", line 35, in mv1pmf_skel
        images, annots = dataset[nf]
      File "c:\mocap\easymocap\easymocap\dataset\mv1pmf.py", line 72, in __getitem__
        images, annots_all = super().__getitem__(index)
      File "c:\mocap\easymocap\easymocap\dataset\base.py", line 482, in __getitem__
        annot = read_annot(annname, self.kpts_type)
      File "c:\mocap\easymocap\easymocap\mytools\file_utils.py", line 46, in read_annot
        data = read_json(annotname)
      File "c:\mocap\easymocap\easymocap\mytools\file_utils.py", line 19, in read_json
        data = json.load(f)
      File "C:\Users\Vinayky86\anaconda3\envs\easymocap\lib\json\__init__.py", line 296, in load
        parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
      File "C:\Users\Vinayky86\anaconda3\envs\easymocap\lib\json\__init__.py", line 348, in loads
        return _default_decoder.decode(s)
      File "C:\Users\Vinayky86\anaconda3\envs\easymocap\lib\json\decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "C:\Users\Vinayky86\anaconda3\envs\easymocap\lib\json\decoder.py", line 353, in raw_decode
        obj, end = self.scan_once(s, idx)
    json.decoder.JSONDecodeError: Invalid \escape: line 2 column 19 (char 20)

    Please correct me if I am doing any steps wrong. Thank you!

    opened by Vinayky86 11
  • check calibration cube

    I'm not sure what I'm doing wrong, but these are the images I get when checking the calibration for 2 cameras.

    When using the first test, I get much more reasonable results.

    The documentation shows a much cleaner cube, and I noticed that issue #71 had similar problems. I manually labeled the checkerboard, so I don't believe that's the problem (though the labels may not be of the highest possible precision). Any idea what's going on?

    opened by pablovela5620 11
  • Visualizer with SMPLX-model possible?

    Hello, after trying different options inside o3d_scene.yml I could only get the skeleton model to run. I assume I must change some arguments for:

    body_model:
      module: "easymocap.visualize.skelmodel.SkelModel"
      args:
        body_type: "body25"  # <- bodyhand? (but it's not working; I was checking config.py for this but I guess I made some mistakes)
        joint_radius: 0.02
        gender: "neutral"
        model_type: "smpl"   # <- smplh or smplx? (not working either)

    On your examples page there is an animated full-body mesh (SMPL-X?) shown, so I guess it's possible to visualize a whole body, right? Thanks again.

    opened by FrankSpalteholz 10
  • Errors on Linux (Ubuntu) when running quickstart example

    First of all, thank you very much for this great repo! When trying the quick-start example I'm facing some errors where I'd really appreciate some help:

    1. After processing the videos:

    Traceback (most recent call last):
      File "scripts/preprocess/extract_video.py", line 267, in <module>
        join(args.path, 'openpose_render', sub), args)
      File "scripts/preprocess/extract_video.py", line 56, in extract_2d
        os.chdir(openpose)
    FileNotFoundError: [Errno 2] No such file or directory: '/media/qing/Project/openpose'

    2. After triangulation:

    -> [Optimize global RT ]: 0.9ms
    Traceback (most recent call last):
      File "apps/demo/mv1p.py", line 109, in <module>
        mv1pmf_smpl(dataset, args)
      File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
        weight_shape=weight_shape, weight_pose=weight_pose)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
        params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pipeline/basic.py", line 18, in multi_stage_optimize
        params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
        params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
        final_loss = fitting.run_fitting(optimizer, closure, opt_params)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
        loss = optimizer.step(closure)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/lbfgs.py", line 307, in step
        orig_loss = closure()
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 227, in closure
        new_params = func(new_params)
      File "/home/frankfurt/dev/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 121, in interp_func
        params[key][nf] = interp(params[key][left], params[key][right], 1-weight, key=key)
    IndexError: index 800 is out of bounds for dimension 0 with size 800

    Then triangulation runs a second time (with the same error as above), followed by this one:

    Traceback (most recent call last):
      File "apps/demo/mv1p.py", line 108, in <module>
        mv1pmf_skel(dataset, check_repro=True, args=args)
      File "apps/demo/mv1p.py", line 35, in mv1pmf_skel
        images, annots = dataset[nf]
      File "/home/frankfurt/dev/EasyMocap/easymocap/dataset/mv1pmf.py", line 73, in __getitem__
        annots = self.select_person(annots_all, index, self.pid)
      File "/home/frankfurt/dev/EasyMocap/easymocap/dataset/base.py", line 461, in select_person
        keypoints = np.zeros((self.config['nJoints'], 3))
    KeyError: 'nJoints'

    Thank you very much!

    opened by FrankSpalteholz 10
  • Camera extrinsic parameters

    Thank you for your perfect work. I have a question about the extri.yml of the provided example multiview dataset. In the provided extri.yml, what's the difference between R_1 and Rot_1? If I prepare my own dataset, should I put the rotation vector of the camera in Rot_1? Also, how is R_1 obtained?

    Another question: if I have the SMPL parameters in JSON, how can I obtain the mesh?

    Thank you very much.

    opened by Xianjin111 9
  • Gradient stuck when optimizing smpl parameters

    I tried to estimate the SMPL parameters of a real scene based on the Colab demo, and the gradient got stuck after 3 to 5 iterations. The camera parameters were estimated with the calibration app (error < 1.5 pixels), and the keypoints were estimated with OpenPose. From the visualization results, we can see that keypoints2d is correct. Any ideas about why the gradient gets stuck?

    s3d 0.340 reg_shapes 0.000
    s3d 0.321 reg_shapes 0.001
    s3d 0.198 reg_shapes 0.036
    s3d 0.159 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    s3d 0.158 reg_shapes 0.067
    
    opened by xyyeah 8
  • RuntimeError: The size of tensor a (42) must match the size of tensor b (0) at non-singleton dimension 1

    Demo code for multiple views and one person:

    - Input : /data/CMU_pose/171026_pose2_sample2 => 00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29
    - Output: /data/CMU_pose/171026_pose2_sample2/output/smplx
    - Body  : smplx=>male, body25
    

    triangulation: 100%|##########| 297/297 [44:23<00:00,  8.97s/it]
    dump: 100%|##########| 297/297 [00:00<00:00, 4726.59it/s]
    loading: 100%|##########| 297/297 [00:03<00:00, 83.65it/s]
    /EasyMocap/easymocap/pyfitting/lbfgs.py:264: UserWarning: This overload of add_ is deprecated:
        add_(Number alpha, Tensor other)
    Consider using one of the following signatures instead:
        add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:1025.)
      p.data.add_(step_size, update[offset:offset + numel].view_as(p.data))
    -> [Optimize global RT ]: 3.7s
    -> [Optimize 3D Pose/297 frames]: 19.6s
    Traceback (most recent call last):
      File "apps/demo/mv1p.py", line 117, in <module>
        mv1pmf_smpl(dataset, args)
      File "apps/demo/mv1p.py", line 71, in mv1pmf_smpl
        weight_shape=weight_shape, weight_pose=weight_pose)
      File "/EasyMocap/easymocap/pipeline/basic.py", line 77, in smpl_from_keypoints3d2d
        params = multi_stage_optimize(body_model, params, kp3ds, kp2ds, bboxes, Pall, weight_pose, cfg)
      File "/EasyMocap/easymocap/pipeline/basic.py", line 29, in multi_stage_optimize
        params = optimizePose3D(body_model, params, kp3ds, weight=weight, cfg=cfg)
      File "/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 301, in optimizePose3D
        params = _optimizeSMPL(body_model, params, prepare_funcs, postprocess_funcs, loss_funcs, weight_loss=weight, cfg=cfg)
      File "/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 246, in _optimizeSMPL
        final_loss = fitting.run_fitting(optimizer, closure, opt_params)
      File "/EasyMocap/easymocap/pyfitting/optimize.py", line 38, in run_fitting
        loss = optimizer.step(closure)
      File "/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py", line 88, in wrapper
        return func(*args, **kwargs)
      File "/EasyMocap/easymocap/pyfitting/lbfgs.py", line 307, in step
        orig_loss = closure()
      File "/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 231, in closure
        loss_dict = {key:func(kpts_est=kpts_est, **new_params) for key, func in loss_funcs.items()}
      File "/EasyMocap/easymocap/pyfitting/optimize_simple.py", line 231, in <dictcomp>
        loss_dict = {key:func(kpts_est=kpts_est, **new_params) for key, func in loss_funcs.items()}
      File "/EasyMocap/easymocap/pyfitting/lossfactory.py", line 63, in hand
        diff_square = (kpts_est[:, 25:25+42, :3] - self.keypoints3d[:, 25:25+42, :3])*self.conf[:, 25:25+42]
    RuntimeError: The size of tensor a (42) must match the size of tensor b (0) at non-singleton dimension 1

    Thank you very much for your awesome work. I also met this error when I tried to run the code with SMPL-X. I deleted the keypoints3d folder and reran the code, but it did not work. How can I solve this?

    opened by neilgogogo 8
  • output-smpl-3d\smplmesh error

    Hello, I have a problem with the monocular demo using the command python apps/demo/mocap.py ${data} --work internet, which I found here: https://chingswy.github.io/easymocap-public-doc/quickstart/quickstart.html. It gives me this error (the dataset is the example files for this command):

    (mocap) C:\Users\miste\Desktop\MOCAP\EasyMocap-master>python apps/demo/mocap.py C:\Users\miste\Desktop\1v1p --work internet
    [run] python3 apps/calibration/create_blank_camera.py C:\Users\miste\Desktop\1v1p
    [run] python3 apps/fit/fit.py --cfg_model config/model/smpl.yml --cfg_data config/data/multivideo.yml --cfg_exp config/fit/1v1p.yml --opt_data "args.path" "C:\Users\miste\Desktop\1v1p" "args.out" "C:\Users\miste\Desktop\1v1p\output-smpl-3d" "args.camera" "C:\Users\miste\Desktop\1v1p" --opt_exp "args.stages.joints.loss.k2d.weight" "100." "args.stages.joints.loss.k2d.args.norm" "gm"
    Traceback (most recent call last):
      File "apps/demo/mocap.py", line 394, in <module>
        workflow(args.work, args)
      File "apps/demo/mocap.py", line 340, in workflow
        append_mocap_flags(path, output, cfg_data, cfg_model, cfg_exp, workflow.fit, args)
      File "apps/demo/mocap.py", line 277, in append_mocap_flags
        filenames = os.listdir(outdir)
    FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\miste\\Desktop\\1v1p\\output-smpl-3d\\smplmesh'

    My data folder contains: bodymodels, models and smplx. In bodymodels I have manov1.2, SMPL_python_v.1.1.0 and smplhv1.2; in models: smpl_mean_params.npz, spin_checkpoint.pt, yolov4.weights; and in smplx the regular files with smpl, smplh and smplx.

    I used this command line to install PyTorch: conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.1 -c pytorch

    But after that I still have the issue and I really don't know what to do.

    BTW: I use Python 3.7 and CUDA 10.1 inside a conda environment on an RTX 2060 6GB.

    opened by WilliamH07 0
  • No such file or directory: 'mirror-youtube-clip-videoonly/images'

    Hi, I'm trying to run the quick start samples (mirror-youtube-clip-videoonly). It took me about 3 days to resolve runtime errors, until I got stuck on the following error with no more ideas for how to fix it:

    [run] python3 apps/preprocess/extract_image.py /content/mirror-youtube-clip-videoonly
    [run] python3 apps/calibration/create_blank_camera.py /content/mirror-youtube-clip-videoonly
    Traceback (most recent call last):
      File "/content/EasyMocap/apps/calibration/create_blank_camera.py", line 24, in <module>
        subs = sorted(os.listdir(join(args.path, 'images')))
    FileNotFoundError: [Errno 2] No such file or directory: '/content/mirror-youtube-clip-videoonly/images'
    [Config] merge from parent file: config/data/multivideo.yml
    [run] python3 apps/fit/fit.py --cfg_model config/model/smpl.yml --cfg_data config/data/multivideo-mirror.yml --cfg_exp config/fit/1v1p-mirror-direct.yml --opt_data "args.path" "/content/mirror-youtube-clip-videoonly" "args.out" "/content/mirror-youtube-clip-videoonly/output-smpl-3d" "args.camera" "/content/mirror-youtube-clip-videoonly" "args.writer.render.scale" "0.5"
    [Config] merge from parent file: config/data/multivideo.yml
    Key is not in the template: args.out
    -> [Loading config/data/multivideo-mirror.yml]:  25.9ms
    Traceback (most recent call last):
      File "/content/EasyMocap/apps/fit/fit.py", line 33, in <module>
        dataset = load_object(cfg_data.module, cfg_data.args)
      File "/content/EasyMocap/easymocap/config/baseconfig.py", line 67, in load_object
        obj = getattr(module, name)(**extra_args, **module_args)
      File "/content/EasyMocap/easymocap/datasets/base.py", line 511, in __init__
        super().__init__(**kwargs)
      File "/content/EasyMocap/easymocap/datasets/base.py", line 399, in __init__
        super().__init__(**kwargs)
      File "/content/EasyMocap/easymocap/datasets/base.py", line 222, in __init__
        self.subs = self.check_subs(path, subs)
      File "/content/EasyMocap/easymocap/datasets/base.py", line 256, in check_subs
        subs = sorted(os.listdir(join(path, self.image_args['root'])))
    FileNotFoundError: [Errno 2] No such file or directory: '/content/mirror-youtube-clip-videoonly/images'
    Traceback (most recent call last):
      File "/content/EasyMocap/apps/demo/mocap.py", line 394, in <module>
        workflow(args.work, args)
      File "/content/EasyMocap/apps/demo/mocap.py", line 340, in workflow
        append_mocap_flags(path, output, cfg_data, cfg_model, cfg_exp, workflow.fit, args)
      File "/content/EasyMocap/apps/demo/mocap.py", line 277, in append_mocap_flags
        filenames = os.listdir(outdir)
    FileNotFoundError: [Errno 2] No such file or directory: '/content/mirror-youtube-clip-videoonly/output-smpl-3d/smplmesh'
    

    My runtime environment is a Google Colab notebook, because I had several problems with pickle on my Ubuntu machine, which led to a *.pkl file loading issue. I will open another issue about the pickle error.

    Please help me on this 😢 I assumed it would take a few hours, but my manager is angry by now (he is waiting for my results 🤦‍♂️).

    Thanks

    opened by Reza-Noei 1
  • triangulation error says value error

    Traceback (most recent call last):
      File "C:\easymocap\apps\demo\mv1p.py", line 116, in <module>
        mv1pmf_skel(dataset, check_repro=True, args=args)
      File "C:\easymocap\apps\demo\mv1p.py", line 43, in mv1pmf_skel
        dataset.vis_detections(images, annots, nf, sub_vis=args.sub_vis)
      File "c:\easymocap\easymocap\dataset\mv1pmf.py", line 58, in vis_detections
        return super().vis_detections(images, lDetections, nf, sub_vis=sub_vis)
      File "c:\easymocap\easymocap\dataset\base.py", line 512, in vis_detections
        valid_idx = [self.cams.index(i) for i in sub_vis]
      File "c:\easymocap\easymocap\dataset\base.py", line 512, in <listcomp>
        valid_idx = [self.cams.index(i) for i in sub_vis]
    ValueError: '1' is not in list

    I get this error when I enter this command to triangulate:

    python apps/demo/mv1p.py 0_input/project --out 1_output/project --vis_det --vis_repro --undis --sub_vis 1 2 3 4 --vis_smpl

    opened by josephajtaX 0
  • KeyError: 'nJoints' when running quickstart example

    When I run the example code, I get the error below:

    python3 apps/demo/mv1p.py ./zju-ls-feng --out ./zju-ls-feng/output/manor --vis_det --vis_repro --undis --sub_vis 1 7 13 19 --body handr --model manor --gender male --vis_smpl

    Demo code for multiple views and one person:

    - Input : /data/code/EasyMocap/zju-ls-feng => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
    - Output: /data/code/EasyMocap/zju-ls-feng/output/manol
    - Body  : manol=>male, handl
    

    triangulation:   0%|          | 0/800 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "/data/code/EasyMocap/apps/demo/mv1p.py", line 116, in <module>
        mv1pmf_skel(dataset, check_repro=True, args=args)
      File "/data/code/EasyMocap/apps/demo/mv1p.py", line 43, in mv1pmf_skel
        dataset.vis_detections(images, annots, nf, sub_vis=args.sub_vis)
      File "/data/code/EasyMocap/easymocap/dataset/mv1pmf.py", line 58, in vis_detections
        return super().vis_detections(images, lDetections, nf, sub_vis=sub_vis)
      File "/data/code/EasyMocap/easymocap/dataset/base.py", line 515, in vis_detections
        return self.writer.vis_keypoints2d_mv(images, lDetections, outname=outname, vis_id=True)
      File "/data/code/EasyMocap/easymocap/mytools/writer.py", line 51, in vis_keypoints2d_mv
        plot_keypoints(img, keypoints, pid=pid, config=self.config, use_limb_color=False, lw=2)
      File "/data/code/EasyMocap/easymocap/mytools/vis_base.py", line 138, in plot_keypoints
        for i in range(min(len(points), config['nJoints'])):
    KeyError: 'nJoints'

    opened by canghaiyunfan 0
Releases (v0.1)
  • v0.1 (Mar 29, 2021)

    The basic version of EasyMocap contains:

    1. 3D reconstruction of SMPL/SMPL+H/SMPL-X model from multiple views
    2. Visualization of 3D skeletons and human meshes
    3. Conversion to bvh format
    4. Camera extrinsic parameter calibration
Owner
ZJU3DV
ZJU3DV is a research group at the State Key Lab of CAD&CG, Zhejiang University, which mainly focuses on research in 3D computer vision, SLAM, and AR.