An improved PyTorch implementation of the TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.

Overview

Face Alignment in Full Pose Range: A 3D Total Solution


By Jianzhu Guo.

[demo GIF: obama]

[Updates]

  • 2020.8.30: The pre-trained model and code of the ECCV-20 work are made public at 3DDFA_V2; the copyright is held by Jianzhu Guo and the CBSR group.
  • 2020.8.2: Added a simple C++ port of this project.
  • 2020.7.3: The extended work Towards Fast, Accurate and Stable 3D Dense Face Alignment was accepted by ECCV 2020. See my page for more details.
  • 2019.9.15: Some updates; see the commits for details.
  • 2019.6.17: Added a video demo contributed by zjjMaiMai.
  • 2019.5.2: Evaluated inference speed on CPU with PyTorch v1.1.0; see speed_cpu.py.
  • 2019.4.27: Added a simple render pipeline running at ~25 ms/frame (720p); see rendering.py for more details.
  • 2019.4.24: Added the obama demo build; see demo@obama/readme.md for more details.
  • 2019.3.28: Some updates.
  • 2018.12.23: Added several features: depth image estimation, PNCC, PAF features and OBJ serialization. See the dump_depth, dump_pncc, dump_paf and dump_obj options for details.
  • 2018.12.2: Added support for landmark-free face cropping; see the dlib_landmark option.
  • 2018.12.1: Refined the code and added a pose estimation feature; see utils/estimate_pose.py for more details.
  • 2018.11.17: Refined the code and mapped the 3D vertices to the original image space.
  • 2018.11.11: Updated the end-to-end inference pipeline: infer/serialize the 3D face shape and 68 landmarks given one arbitrary image; please see readme.md below for more details.
  • 2018.10.4: Added a MATLAB face mesh rendering demo in visualize.
  • 2018.9.9: Added face-cropping pre-processing in benchmark.


Introduction

This repo holds an improved PyTorch version of the paper: Face Alignment in Full Pose Range: A 3D Total Solution. It adds several things beyond the original paper, including real-time training and additional training strategies. So far, this repo releases the pre-trained first-stage PyTorch models with the MobileNet-V1 structure, the pre-processed training and testing datasets, and the codebase. Note that the inference time is about 0.27 ms per image (with a batch of 128 images) on a GeForce GTX TITAN X.

This repo will keep updating in my spare time; meaningful issues and PRs are welcome.

Several results on the AFLW-2000 dataset (inferred with the model phase1_wpdc_vdc.pth.tar) are shown below.

[figure: 3D landmarks]

[figure: 3D dense vertices]

Applications & Features

1. Face Alignment

[demo image: dapeng]

2. Face Reconstruction

[demo image: face reconstruction]

3. 3D Pose Estimation

[demo image: tongliya]

4. Depth Image Estimation

[demo image: depth estimation]

5. PNCC & PAF Features

[demo image: PNCC & PAF features]

Getting started

Requirements

  • PyTorch >= 0.4.1 (PyTorch v1.1.0 is tested successfully on macOS and Linux.)
  • Python >= 3.6 (NumPy, SciPy, Matplotlib)
  • Dlib (Dlib is optional, for face and landmark detection. There is no need for Dlib if you can provide the face bounding box and landmarks yourself. Besides, you can try the two-step inference strategy without initial landmarks.)
  • OpenCV (Python version, for image IO operations.)
  • Cython (For accelerating the depth and PNCC rendering.)
  • Platform: Linux or macOS (Windows is not tested.)
# installation instructions
sudo pip3 install torch torchvision # CPU version; see https://pytorch.org for more options
sudo pip3 install numpy scipy matplotlib
sudo pip3 install dlib==19.5.0 # 19.15+ versions may conflict with pytorch on Linux; this may take several minutes. If 19.5.0 raises errors, you may try a 19.15+ version.
sudo pip3 install opencv-python
sudo pip3 install cython
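
A quick way to verify the environment afterwards (a hypothetical sanity check, not part of this repo):

# check that all dependencies import cleanly
import torch, torchvision, cv2, dlib, numpy, scipy, matplotlib

print('torch:', torch.__version__)
print('opencv:', cv2.__version__)
print('dlib:', dlib.__version__)  # 19.5.0 recommended above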

In addition, I strongly recommend using Python 3.6+ instead of older versions for its better design.

Usage

  1. Clone this repo (this may take a while, as it is somewhat large)

    git clone https://github.com/cleardusk/3DDFA.git  # or [email protected]:cleardusk/3DDFA.git
    cd 3DDFA
    

    Then, download the pre-trained dlib landmark model from Google Drive or Baidu Yun, and put it into the models directory. (To reduce this repo's size, some large binary files, including this model, were removed, so you should download it : ) )
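
    For reference, the detector and the landmark model are used roughly as follows (a minimal sketch of the cropping initialization; the exact logic lives in main.py):

    import cv2
    import dlib

    # sketch: detect faces and 68 initial landmarks with the downloaded model
    face_detector = dlib.get_frontal_face_detector()
    face_regressor = dlib.shape_predictor('models/shape_predictor_68_face_landmarks.dat')

    img = cv2.imread('samples/test1.jpg')
    rects = face_detector(img, 1)  # upsample once to catch smaller faces
    for rect in rects:
        pts = face_regressor(img, rect).parts()
        landmarks = [(pt.x, pt.y) for pt in pts]  # 68 (x, y) initial landmarks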

  2. Build the Cython module (just one line for building)

    cd utils/cython
    python3 setup.py build_ext -i
    

    This accelerates depth estimation and PNCC rendering, since pure-Python loops are too slow.

  3. Run main.py with an arbitrary image as input

    python3 main.py -f samples/test1.jpg
    

    If you see the following log output in the terminal, the run succeeded.

    Dump tp samples/test1_0.ply
    Save 68 3d landmarks to samples/test1_0.txt
    Dump obj with sampled texture to samples/test1_0.obj
    Dump tp samples/test1_1.ply
    Save 68 3d landmarks to samples/test1_1.txt
    Dump obj with sampled texture to samples/test1_1.obj
    Dump to samples/test1_pose.jpg
    Dump to samples/test1_depth.png
    Dump to samples/test1_pncc.png
    Save visualization result to samples/test1_3DDFA.jpg
    

    Because test1.jpg has two faces, two .ply and two .obj files (which can be rendered with MeshLab or Microsoft 3D Builder) are predicted. Depth, PNCC, PAF and pose estimation are all enabled by default. Please run python3 main.py -h or review the code for more details. A sketch for inspecting the dumped landmarks follows below.
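
    To inspect the dumped landmarks programmatically, something like the following works (a minimal sketch; the orientation of the array in the .txt file is an assumption, so the code checks both):

    import numpy as np

    # load the 68 3D landmarks dumped by main.py (hypothetical inspection snippet)
    pts = np.loadtxt('samples/test1_0.txt')
    if pts.shape[0] == 3:   # stored as 3 x 68 (x, y, z rows)
        pts = pts.T
    print(pts.shape)        # expect (68, 3)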

    The 68-landmark visualization result samples/test1_3DDFA.jpg and the pose estimation result samples/test1_pose.jpg are shown below:

[figures: samples/test1_3DDFA.jpg and samples/test1_pose.jpg]

  4. Additional example

    python3 ./main.py -f samples/emma_input.jpg --bbox_init=two --dlib_bbox=false
    

[figures: emma example outputs]

Inference speed

CPU

Just run

python3 speed_cpu.py

On my MBP (i5-8259U CPU @ 2.30GHz, 13-inch MacBook Pro), with PyTorch v1.1.0 and a single image as input, the output is:

Inference speed: 14.50±0.11 ms

GPU

With an input batch size of 128, the total inference time of MobileNet-V1 is about 34.7 ms, i.e. about 0.27 ms per image.

[figure: inference speed]
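
The batched GPU number above can be reproduced roughly as follows (a sketch; it assumes the repo's mobilenet_v1 module exposes a mobilenet_1 constructor with a num_classes argument, as the training script's --arch flag suggests, and it is not the exact benchmarking script):

import torch
import mobilenet_v1  # model definition from this repo

# rough batched GPU timing; requires a CUDA device
model = getattr(mobilenet_v1, 'mobilenet_1')(num_classes=62).cuda().eval()
x = torch.randn(128, 3, 120, 120).cuda()  # batch of 128 cropped 120x120 faces

with torch.no_grad():
    for _ in range(10):  # warm-up
        model(x)
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    model(x)
    end.record()
    torch.cuda.synchronize()
    print('batch latency: %.1f ms' % start.elapsed_time(end))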

Training details

The training scripts lie in the training directory. The related resources are listed in the table below.

| Data | Download Link | Description |
| --- | --- | --- |
| train.configs | BaiduYun or Google Drive, 217M | 3DMM params and file lists of the training dataset |
| train_aug_120x120.zip | BaiduYun or Google Drive, 2.15G | Cropped images of the augmented training dataset |
| test.data.zip | BaiduYun or Google Drive, 151M | Cropped images of the AFLW and AFLW-2000-3D test sets |

After preparing the training dataset and configuration files, go into the training directory and run the bash scripts to train. train_wpdc.sh, train_vdc.sh and train_pdc.sh are example training scripts; after configuring the training and testing sets, just run them. Take train_wpdc.sh as an example:

#!/usr/bin/env bash

LOG_ALIAS=$1
LOG_DIR="logs"
mkdir -p ${LOG_DIR}

LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
#echo $LOG_FILE

./train.py --arch="mobilenet_1" \
    --start-epoch=1 \
    --loss=wpdc \
    --snapshot="snapshot/phase1_wpdc" \
    --param-fp-train='../train.configs/param_all_norm.pkl' \
    --param-fp-val='../train.configs/param_all_norm_val.pkl' \
    --warmup=5 \
    --opt-style=resample \
    --resample-num=132 \
    --batch-size=512 \
    --base-lr=0.02 \
    --epochs=50 \
    --milestones=30,40 \
    --print-freq=50 \
    --devices-id=0,1 \
    --workers=8 \
    --filelists-train="../train.configs/train_aug_120x120.list.train" \
    --filelists-val="../train.configs/train_aug_120x120.list.val" \
    --root="/path/to/train_aug_120x120" \
    --log-file="${LOG_FILE}"

The specific training parameters are all present in the bash scripts, including the learning rate, mini-batch size, number of epochs, and so on.

Evaluation

First, download the cropped test sets AFLW and AFLW-2000-3D in test.data.zip, then unzip it and put it in the root directory. Next, run the benchmark code by providing the trained model path. I have already provided five pre-trained models in the models directory (see the table below). These models were trained with different losses in the first stage. The model size is about 13M, owing to the efficiency of the MobileNet-V1 structure.

python3 ./benchmark.py -c models/phase1_wpdc_vdc.pth.tar

The performances of the pre-trained models are shown below. In the first stage, the effectiveness of the different losses ranks as WPDC > VDC > PDC, while the strategy of using VDC to fine-tune WPDC achieves the best result.

| Model | AFLW (21 pts) | AFLW 2000-3D (68 pts) | Download Link |
| --- | --- | --- | --- |
| phase1_pdc.pth.tar | 6.956±0.981 | 5.644±1.323 | Baidu Yun or Google Drive |
| phase1_vdc.pth.tar | 6.717±0.924 | 5.030±1.044 | Baidu Yun or Google Drive |
| phase1_wpdc.pth.tar | 6.348±0.929 | 4.759±0.996 | Baidu Yun or Google Drive |
| phase1_wpdc_vdc.pth.tar | 5.401±0.754 | 4.252±0.976 | In this repo. |
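
The numbers above are NME (normalized mean error) statistics, mean±std over the test set. A sketch of the metric, assuming normalization by the bounding-box size as in the paper's AFLW protocol (see benchmark.py for the exact code):

import numpy as np

def nme(pts_pred, pts_gt, bbox):
    """pts_*: (N, 2) landmark arrays; bbox: (x_min, y_min, x_max, y_max)."""
    # normalize the mean point-to-point error by the bounding-box size
    norm = np.sqrt((bbox[2] - bbox[0]) * (bbox[3] - bbox[1]))
    return np.mean(np.linalg.norm(pts_pred - pts_gt, axis=1)) / norm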

About the performance

The framework of this repo can achieve better performance than PRNet without increasing the computation budget. The related work is under review, and the code will be released upon acceptance.

FAQ

  1. Face bounding box initialization

    The original paper shows that using a detected bounding box instead of the ground-truth box causes only a small performance drop, so the current face-cropping method is robust. Quantitative results are shown in the table below.

[table: quantitative results for bounding-box initialization]

  2. Face reconstruction

    The texture of non-visible areas is distorted due to self-occlusion, so the non-visible face region may look strange (a little unsettling).

  3. Shape and expression parameter clipping

    Parameter clipping accelerates training and reconstruction, but degrades accuracy, especially for details such as closed eyes. Below is an image with parameter dimensions 40+10, 60+29 and 199+29 (the original). Compared to shape, expression clipping has more effect on reconstruction accuracy when emotion is involved. You can therefore choose a trade-off between speed/parameter size and accuracy; a recommended clipping trade-off is 60+29. A sketch of how the default parameter vector splits is shown after the figure.

[figure: reconstructions with parameter dimensions 40+10, 60+29 and 199+29]
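
For reference, the default 62-dim parameter vector corresponds to this 40+10 clipping plus a 12-dim pose part; it splits roughly as below (a sketch mirroring the repo's _parse_param helper; treat the exact slicing as an assumption):

import numpy as np

def parse_param(param):
    # 62 = 12 (pose) + 40 (shape) + 10 (expression), under the default clipping
    p_ = param[:12].reshape(3, 4)            # camera matrix [R | t]
    p, offset = p_[:, :3], p_[:, 3].reshape(3, 1)
    alpha_shp = param[12:52].reshape(-1, 1)  # shape coefficients
    alpha_exp = param[52:62].reshape(-1, 1)  # expression coefficients
    return p, offset, alpha_shp, alpha_exp

# dense vertices are then reconstructed roughly as
#   p @ (u + w_shp @ alpha_shp + w_exp @ alpha_exp).reshape(3, -1) + offset
# with u, w_shp, w_exp taken from train.configs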

Acknowledgement

Thanks for your interest in this repo. If your work or research benefits from it, please star it 😃

You may also be interested in my other 3D-face-related works: MeGlass and Face Anti-Spoofing.

Citation

If your work benefits from this repo, please cite the three BibTeX entries below.

@misc{3ddfa_cleardusk,
  author =       {Guo, Jianzhu and Zhu, Xiangyu and Lei, Zhen},
  title =        {3DDFA},
  howpublished = {\url{https://github.com/cleardusk/3DDFA}},
  year =         {2018}
}

@inproceedings{guo2020towards,
  title=        {Towards Fast, Accurate and Stable 3D Dense Face Alignment},
  author=       {Guo, Jianzhu and Zhu, Xiangyu and Yang, Yang and Yang, Fan and Lei, Zhen and Li, Stan Z},
  booktitle=    {Proceedings of the European Conference on Computer Vision (ECCV)},
  year=         {2020}
}

@article{zhu2017face,
  title=      {Face alignment in full pose range: A 3d total solution},
  author=     {Zhu, Xiangyu and Liu, Xiaoming and Lei, Zhen and Li, Stan Z},
  journal=    {IEEE transactions on pattern analysis and machine intelligence},
  year=       {2017},
  publisher=  {IEEE}
}

Contact

Jianzhu Guo (郭建珠) [Homepage, Google Scholar]: [email protected] or [email protected].

Comments
  • Training params

    Can you describe the parameter format? I noticed that the param_all_norm.pkl file contains 62 extracted parameters for each image in your dataset. With the FaceProfiling code (http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/HPEN/main.htm) I generated a dataset containing my own profiles, but I don't understand how to create the pkl file from the FaceProfiling output variables.

    future 
    opened by joaootavio93 17
  • mesh disorder

    Hi, I ran into a problem when testing your program: when I render my .mat it looks wrong (see the attached image), but if I render the .mat you provided, it works fine. Hope you can help me, thanks.

    opened by snowzhangy 16
  • how to generate two styles in obama_three_styles.gif

    Hey @cleardusk, I have a question about the GIF obama_three_styles.gif. Can you please explain how you generated the two style overlays in it? I understand the dlib facial points, but I am new to this topic and couldn't grasp how you generated the first two styles. Are they the depth map and PNCC rendered in different colors?

    Thanks in advance

    opened by gara-MI 10
  • Regressor from 3D dense vertices to 3D keypoints

    Hi! Thanks for your nice work!

    I am wondering how to obtain the indices of the 68 keypoints from the original 50,000+ dense vertices.

    I want to get the indices of 106 keypoints; is there any solution?

    Thanks!

    opened by FishWoWater 8
  • obama@demo

    Hello, this is great work and thank you for sharing your code, but I have some trouble understanding it. There is a frame rendered with the 3D model in the demo@obama directory. I looked in the code but there is no documentation on how you did this. I know how to calculate R, T and the 3DMM model, but how can I render this 3DMM on the face? How should I align the 3DMM and the face? I would appreciate it if you could explain it theoretically or just point to the code.

    Thank you

    opened by Asiyeh-Bahaloo 8
  • mesh_core_cython.cpp:609:10: fatal error: 'ios' file not found

    When I ran the command "python3 setup.py build_ext -i", the following error was thrown:

    warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found] In file included from mesh_core_cython.cpp:607: In file included from /anaconda3/envs/py36/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4: In file included from /anaconda3/envs/py36/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:12: In file included from /anaconda3/envs/py36/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1824: /anaconda3/envs/py36/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] #warning "Using deprecated NumPy API, disable it with "
    ^ mesh_core_cython.cpp:609:10: fatal error: 'ios' file not found #include "ios" ^~~~~ 2 warnings and 1 error generated.

    There seem to be few ways to solve this problem. Help me, please.

    macOS 
    opened by Watebear 8
  • adding texture to 3d model

    Hi,

    In your documentation, under Applications -> 2. Face Reconstruction, there is a 3D model and then texture is added to it. Is this also covered in your code? What I understood from the .ply file is that it only contains the shape, not texture and lighting. Right?

    Thanks again for your wonderful work!

    opened by cnaaq 8
  • Implementation is different with the paper?

    Hi, thank you for sharing your work. I read the TPAMI paper, but I didn't find the two-stream architecture in this repository. The paper proposes Pose Adaptive Convolution and the Projected Normalized Coordinate Code, which are not contained in this repository either. From main.py, it just crops the face region in the image and feeds it to a MobileNet. I wonder why this simple regression strategy can produce the promising results you provide. Does data augmentation matter?

    opened by niujinshuchong 8
  • How to fit the extracted face features in a full head mesh?

    Hi, thanks for sharing this awesome work. I want to make a full-head 3D face model into which I can later fit some hair features. Also, can the output mesh be attached to a separate 3D mesh? Thanks in advance!

    opened by AIdeveloper-oz 7
  • license for using depth image

    Dear @cleardusk ,

    Thanks for sharing your great work.

    I notice that this project is under the MIT license.
    If I wish to use the depth image, it also involves the 3DMM rendering part. Is this part also under the MIT license?

    Thanks and Regards,

    opened by liwei46 7
  • about face model data

    Hi Jianzhu, how can I obtain Model_Expression.mat in the face profiling code? Is it part of the BFM model? I am looking forward to your reply, thank you!

    opened by niannianmeng 7
  • Create a more accurate 3D model

    Hi, thank you for this beautiful work. I tried this model on several images. Why are the nose structures of all outputs the same? I have a bump and a deviation on my nose, but the reconstructed nose looks very beautiful, as if I had had a nose job. Is there a way to improve the work and produce more realistic models? I think the model does not estimate the depth of the image correctly, so it does not recognize the protrusion of the nose.

    opened by rezalahmi 0
  • License of 300-W-LP and AFLW2000-3D

    Hi @cleardusk,

    The project is under the MIT license but I would like to know the licenses of 300-W-LP and AFLW2000-3D. Are they the same as 300-W and AFLW?

    opened by MartFire 0
  • RuntimeError: Unable to open models/shape_predictor_68_face_landmarks.dat

    When running main.py, the following error occurs: RuntimeError: Unable to open models/shape_predictor_68_face_landmarks.dat

    platform:

    • Linux

    There does not appear to be a shape_predictor_68_face_landmarks.dat file in the models folder. How can I solve this problem? Help!!!

    opened by Aweiiss 2
  • Project dependencies have API risk issues

    Hi, in 3DDFA, inappropriate dependency versioning constraints can cause risks.

    Below are the dependencies and version constraints that the project is using

    torch>=0.4.1
    torchvision>=0.2.1
    numpy>=1.15.4
    scipy>=1.1.0
    matplotlib==3.0.2
    dlib==19.5.0
    opencv-python>=3.4.3.18
    

    The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict. Constraints with no upper bound, or *, introduce a risk of missing-API errors because the latest version of a dependency may remove some APIs.

    After further analysis, the version constraint of the dependency scipy in this project can be changed to >=0.8.0,<=1.2.3.

    The above modification suggestions can reduce dependency conflicts as far as possible while introducing the latest versions without API errors in the project.

    The invocation of the current project includes all the following methods.

    The calling methods from scipy:
    io.imread
    
    The calling methods from all methods:
    img_step2.transform.unsqueeze
    _predict_vertices
    crop_img
    gen_img_paf
    data_time.update
    point_2d.reshape
    cv2.imshow
    input_.data.clone
    rect_fp.open.read
    torch.abs
    imageio.imread
    self.dw5_6
    np.float32.BG.astype.copy
    plt.figure
    int
    cget_depths_image
    _to_tensor
    pickle.load
    rects.append
    min
    ax.axis
    ncc
    rect.top
    np.abs
    np.zeros
    torch.randperm
    logging.info
    k.replace
    asin
    plt.savefig
    logging.StreamHandler
    self.dw2_1
    np.int32
    dlib.rectangle
    index.index.index.torch.cat.view
    cpncc
    data.DataLoader
    checkpoint.keys
    obama_demo
    build_camera_box
    adjust_learning_rate
    alpha_expg.self.w_exp.alpha_shpg.self.w_shp.self.u.view
    self.reset
    torch.no_grad
    param.squeeze.cpu.numpy.flatten.astype
    pickle.dump
    RenderPipeline
    cv2.line
    _tensor_to_cuda
    np.minimum
    model
    aflw
    m.weight.data.normal_
    tuple
    gen_offsets
    self.conv1
    np.savetxt
    args.milestones.split
    np.dot
    device_ids.model.nn.DataParallel.cuda
    p_.reshape
    torch.save
    transform
    param.view
    torch.randn
    _parse_param_batch
    dlib.get_frontal_face_detector
    p_.view
    ana_aflw
    x.size
    batch_time.update
    input.cuda
    plot_close
    self.VDCLoss.super.__init__
    self.mean.tensor.sub_.div_
    header.format
    io.imread
    img.transform.unsqueeze
    torch.norm
    torch.optim.SGD
    mesh_core_cython.get_normal
    vertices_lst.append
    self.conv_sep
    calc_nme_alfw
    self._calc_weights_resample
    np.float32.colors.astype.copy
    plt.imshow
    rect.left
    v.lower
    calc_nme_alfw2000
    benchmark_alfw_params
    cv2.imread
    ax.plot3D
    mesh_core_cython.render_colors_core
    point_3d_homo.dot
    self.forward_all
    depths_img.squeeze
    time.time
    nn.Conv2d
    WPDCLoss
    _load
    self.MobileNet.super.__init__
    self.bn_sep
    args.opt_style.WPDCLoss.cuda
    logging.FileHandler
    np.array
    losses.update
    face_regressor
    type
    nn.DataParallel
    param.squeeze.cpu
    np.zeros_like
    draw_landmarks
    aflw2000
    self.WPDCLoss.super.__init__
    last_frame_pts.append
    nn.AdaptiveAvgPool2d
    self.dw6
    alpha_expg.w_exp_base.alpha_shpg.w_shp_base.u_base.view
    NormalizeGjz
    maxes.view
    dlib.shape_predictor
    rect.frame.face_regressor.parts
    face_detector
    super
    ax.set_xticklabels
    save_checkpoint
    gen_3d_vertex
    transforms.Compose
    np.linalg.norm
    loss.item
    _to_ctype
    enumerate
    torch.zeros_like
    plt.tight_layout
    osp.join
    cos
    weights.max
    vertices.np.max.reshape
    plt.figaspect
    ax.set_zticklabels
    max
    tensor.sub_
    sys.path.append
    app
    atan2
    parser.parse_args
    model.cuda
    outputs.append
    nme_list.append
    render_img.astype
    sio.savemat
    torch.tensor
    vars
    get_colors
    DDFATestDataset
    np.float.point_3d.np.array.reshape
    sio.loadmat
    ax.imshow
    model.parameters
    N.alpha_exp.w_exp_base.alpha_shp.w_shp_base.u_base.view.permute
    param.reshape
    self.resample_num.self.w_shp_length.torch.randperm.reshape
    img_fp.imageio.imread.astype
    colors.astype
    loss.mean
    pnccs_img.squeeze
    inputs.cuda
    imgs.append
    calc_hypotenuse
    arr.copy
    self.dw5_3
    kwargs.get
    np.int32.triangles.astype.copy
    self.dw3_2
    convert_type
    filelists.Path.read_text
    cv2.waitKey
    image.copy
    np.min
    os.system
    test
    _benchmark_aflw
    imageio.mimwrite
    filelists.open.read
    ToTensorGjz
    np.max
    setup
    vc.read
    meta.get
    mode.lower
    dump_to_ply
    plt.subplots_adjust
    alpha_exp.w_exp_base.alpha_shp.w_shp_base.u_base.reshape
    ax.set_yticklabels
    np.ones
    self.reconstruct_and_parse
    reconstruct_paf_anchor
    np.concatenate
    sorted
    model.train
    obj_name.split
    np.where
    filelists.Path.read_text.strip.split
    self.bn1
    nn.Linear
    range
    cv2.polylines
    model.eval
    P2sRt
    param.squeeze.cpu.numpy.flatten
    imageio.imwrite
    MobileNet
    self.fc
    i.output.cpu
    parse_pose
    ax.plot
    argparse.ArgumentParser
    np.sum
    cv2.resize
    img.float
    x.cuda
    print_args
    os.mkdir
    filelists.open.read.strip.split
    nn.MSELoss
    _numpy_to_tensor
    self.transform
    predict_68pts
    alpha_exp.self.w_exp.alpha_shp.self.w_shp.self.u.view
    open
    filelists.open.read.strip
    rect_fp.open.read.strip
    pts_res.append
    parse_args
    render.crender_colors
    torch.load
    x.cpu.numpy
    np.mean
    osp.dirname
    _benchmark_aflw2000
    t3d.reshape
    l.split
    self.forward_resample
    np.float32.vertices.astype.copy
    timeit.repeat
    norm_vertices
    self.dw5_4
    np.maximum
    resample_num.self.w_shp_length.torch.randperm.reshape
    filename.rfind
    arch.mobilenet_v1.getattr
    self.avgpool
    param.squeeze
    self.conv_dw
    np.floor
    self.dw4_2
    main
    i.output.cpu.numpy.flatten
    rect.img_ori.face_regressor.parts
    kpt.np.round.astype
    imageio.imread.astype
    is_point_in_tri
    map
    alpha_exp.w_exp_base.alpha_shp.w_shp_base.u_base.view
    cv2.imwrite
    losses.append
    img_fp.replace
    np.std
    _load_cpu
    criterion
    numpy.get_include
    dump_vertex
    Extension
    plot_pose_box
    vertices.min
    crender_colors
    self.dw3_1
    rect.bottom
    index0.index1.img_crop.reshape.transpose
    get_suffix
    print
    cv2.VideoCapture
    glob
    np.round
    _dump
    poses.append
    model.load_state_dict
    paf.get
    N.alpha_expg.w_exp_base.alpha_shpg.w_shp_base.u_base.view.permute
    _get_suffix
    np.bitwise_and
    to
    os.path.exists
    alpha_exp.w_exp_filter.alpha_shp.w_filter.u_filter.reshape
    alpha_exp.w_exp.alpha_shp.w_shp.u.reshape
    model_dict.keys
    getattr
    args.devices_id.model.nn.DataParallel.cuda
    args.loss.lower
    block
    torch.from_numpy
    Path
    np.ceil
    np.load
    list
    self._target_loader
    N.alpha_expg.self.w_exp.alpha_shpg.self.w_shp.self.u.view.permute
    predict_dense
    osp.split
    self.dw5_5
    model.astype
    extract_param
    plt.plot
    mkdir
    N.alpha_exp.self.w_exp.alpha_shp.self.w_shp.self.u.view.permute
    plt.close
    benchmark_aflw2000_params
    ana_alfw2000
    join
    triangles.copy
    index0.index1.img_crop.reshape
    nn.PReLU
    triangles._to_ctype.astype
    VDCLoss
    f.write
    ax.scatter
    np.cross
    triangles.astype
    parse_roi_box_from_landmark
    sqrt
    anchor.np.round.astype
    keypoints.T.flatten
    BG.astype
    rect_fp.open.read.strip.split
    filelists.Path.read_text.strip
    input.size
    keypoints.u.reshape
    _norm
    self.bn_dw
    train
    args.size_average.nn.MSELoss.cuda
    args.devices_id.split
    optimizer.zero_grad
    loss.backward
    osp.basename
    model_path.replace
    x.view
    reconstruct_vertex
    args.opt_style.VDCLoss.cuda
    args.arch.mobilenet_v1.getattr
    fn.replace
    parser.add_argument
    write_obj_with_colors
    DDFADataset
    kc.replace
    vertices.max
    torch.mean
    os.path.isdir
    cv2.circle
    vertices.copy
    rect.right
    optimizer.step
    validate
    fig.add_subplot
    img_loader
    Ps.append
    m.bias.data.zero_
    target.cuda
    np.hstack
    torch.onnx.export
    self.dw5_1
    format
    param.squeeze.cpu.numpy
    nn.ReLU
    isinstance
    Exception
    self.dw5_2
    _parse_param
    osp.realpath
    convert_to_ori
    self.dw2_2
    torch.cat
    str
    fp._load.torch.from_numpy.cuda
    logging.basicConfig
    m.weight.data.fill_
    self.DepthWiseBlock.super.__init__
    render_colors
    make_abs_path
    vertices.astype
    math.sqrt
    self.img_loader
    AverageMeter
    np.sqrt
    self.relu
    vertices.np.min.reshape
    len
    np.save
    np.clip
    DataLoader
    benchmark_pipeline
    plt.axis
    x.cpu
    argparse.ArgumentTypeError
    checkpoint_fp.replace
    matrix2angle
    plt.show
    vertices.np.round.astype
    np.power
    model.state_dict
    x.split.replace
    x.split
    i.output.cpu.numpy
    target_.data.clone
    round
    pic.transpose
    u_exp.u_shp.astype
    parse_roi_box_from_bbox
    index.index.index.torch.cat.view.cuda
    dlib.rectangles
    point_3d.append
    self.dw4_1
    osp.exists
    ax.view_init
    nn.BatchNorm2d
    self.modules
    args.resume.Path.is_file
    torch.cuda.set_device
    

    Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

    opened by PyDeps 0
  • Hi! cleardusk, I ran into a bug

    Hi! cleardusk, I ran into a bug: "Mex_ZBufferC.mexw64 无效: 找不到指定的模块" ("Mex_ZBufferC.mexw64 is invalid: the specified module could not be found"). My environment is Win10, and my MATLAB version is 2020a. I want to know whether some file is missing or there is some other reason. Looking forward to your reply.


    opened by lizhenqi111 1