CPF: Learning a Contact Potential Field to Model the Hand-object Interaction

Overview

Contact Potential Field

This repo contains the model, demo, and test code of our paper: CPF: Learning a Contact Potential Field to Model the Hand-object Interaction.
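
As a rough intuition (our paraphrase, not the paper's exact formulation), CPF treats contact as a set of attractive springs between hand anchors and their corresponding object regions, so fitting minimizes a spring-like attraction energy:

\[
E_{\mathrm{attr}} \;=\; \sum_{a} \tfrac{1}{2}\, k_a \,\lVert x_a - \hat{x}_a \rVert^{2}
\]

where, in this sketch, k_a is the predicted elasticity of anchor a, x_a its position on the hand, and \hat{x}_a its attracted target on the object.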

Guide to the Demo

1. Get our code:

$ git clone --recursive https://github.com/lixiny/CPF.git
$ cd CPF

2. Set up your new environment:

$ conda env create -f environment.yaml
$ conda activate cpf

3. Download the asset files and put them in the assets folder.

Download the MANO model files from the official MANO website and put them into assets/mano. We currently only use MANO_RIGHT.pkl.

Now your assets folder should look like this:

.
├── anchor/
│   ├── anchor_mapping_path.pkl
│   ├── anchor_weight.txt
│   ├── face_vertex_idx.txt
│   └── merged_vertex_assignment.txt
├── closed_hand/
│   └── hand_mesh_close.obj
├── fhbhands_fits/
│   ├── Subject_1/
│   │   ├── ...
│   ├── Subject_2/
│   ├── ...
├── hand_palm_full.txt
└── mano/
    ├── fhb_skel_centeridx9.pkl
    ├── info.txt
    ├── LICENSE.txt
    └── MANO_RIGHT.pkl
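
Before moving on, a quick load check (our own sketch, not part of the repo) can catch a misplaced MANO file early; a missing assets/mano/MANO_RIGHT.pkl is the most common failure reported in the comments below. Unpickling MANO_RIGHT.pkl assumes chumpy is importable, which the conda environment installs.

import os
import pickle

# Our sketch: verify the MANO asset is where the code expects it.
mano_path = "assets/mano/MANO_RIGHT.pkl"
assert os.path.isfile(mano_path), f"missing {mano_path}; see step 3"

# Unpickling needs chumpy (installed by environment.yaml).
with open(mano_path, "rb") as f:
    mano = pickle.load(f, encoding="latin1")
print(sorted(mano.keys()))  # expect keys such as 'f', 'v_template', 'shapedirs'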

4. Download Dataset

First-Person Hand Action Benchmark (fhb)

Download and unzip the First-Person Hand Action Benchmark dataset into the data/fhbhands folder, following the official instructions. If everything is correct, your data/fhbhands should look like this:

.
├── action_object_info.txt
├── action_sequences_normalized/
├── change_log.txt
├── data_split_action_recognition.txt
├── file_system.jpg
├── Hand_pose_annotation_v1/
├── Object_6D_pose_annotation_v1_1/
├── Object_models/
├── Subjects_info/
├── Video_files/
├── Video_files_480/ # optional

Optionally, resize the images (this speeds up training!) using the reduce_fphab.py script from handobjectconsist:

$ python reduce_fphab.py

Download our fhbhands_supp and place it at data/fhbhands_supp:

Download our fhbhands_example and place it at data/fhbhands_example. fhbhands_example contains 10 samples designed to demonstrate our pipeline; a quick sanity check follows the directory tree below.

├── fhbhands/
├── fhbhands_supp/
│   ├── Object_models/
│   └── Object_models_binvox/
├── fhbhands_example/
│   ├── annotations/
│   ├── images/
│   ├── object_models/
│   └── sample_list.txt
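
As a quick sanity check (our own sketch; it assumes sample_list.txt holds one sample entry per line), you can peek at the example split:

# Our sketch: list the demo samples (assumed format: one entry per line).
with open("data/fhbhands_example/sample_list.txt") as f:
    samples = [line.strip() for line in f if line.strip()]
print(len(samples), "samples, e.g.", samples[:3])  # expect 10 samples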

HO3D

Download and unzip the HO3D dataset into the data/HO3D folder, following the official instructions. If everything is correct, the HO3D and YCB folders in your data directory should look like this:

data/
├── HO3D/
│   ├── evaluation/
│   ├── evaluation.txt
│   ├── train/
│   └── train.txt
├── YCB_models/
│   ├── 002_master_chef_can/
│   ├── ...

Download our YCB_models_supp and place it at data/YCB_models_supp.

Now the data folder should have a root structure like:

data/
├── fhbhands/
├── fhbhands_supp/
├── fhbhands_example/
├── HO3D/
├── YCB_models/
├── YCB_models_supp/
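
To verify an object model loads correctly, you can inspect one mesh with trimesh (pinned in environment.yaml). This is our own sketch; the exact mesh filename inside each YCB model folder is an assumption here and may differ in your download.

import trimesh

# Our sketch: load one YCB mesh; 'textured_simple.obj' is an assumed filename.
mesh = trimesh.load(
    "data/YCB_models/002_master_chef_can/textured_simple.obj", force="mesh"
)
print(mesh.vertices.shape, mesh.faces.shape, mesh.is_watertight)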

5. Download pre-trained checkpoints

Download our pre-trained CPF_checkpoints and unzip them into the CPF_checkpoints folder:

CPF_checkpoints/
├── honet/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/
├── picr/
│   ├── fhb/
│   ├── ho3dofficial/
│   └── ho3dv1/

6. Launch visualization

We create an FHBExample dataset in hocontact/hodatasets/fhb_example.py that contains only 10 samples to demonstrate our pipeline. Note: this demo requires an active screen for visualization. Press q in the "runtime hand" window to start fitting.

$ python training/run_demo.py \
    --gpu 0 \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar \
    --honet_mano_fhb_hand
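
Since the demo opens on-screen windows, it is worth confirming a display is available (e.g. when working over SSH). This check is our own sketch, not part of the repo:

import os

# Our sketch: the demo needs an active screen; a missing DISPLAY on Linux
# usually means the visualizer windows cannot open.
print("DISPLAY =", os.environ.get("DISPLAY", "<not set; demo will likely fail headless>"))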

7. Test on the full datasets (FHB, HO3Dv1, HO3Dofficial)

We provide shell scripts to test on the full datasets and approximately reproduce our results.

FHB

Dump the results of HoNet and PiCR:

$ python training/dumppicr_dist.py \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12355 \
    --exp_keyword fhb \
    --train_datasets fhb \
    --train_splits train \
    --val_dataset fhb \
    --val_split test \
    --split_mode actions \
    --batch_size 8 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr \
    --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar

then reload the dumped results and run the GeO optimizer:

# setting 1: hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand

# setting 2: hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3 python training/optimize.py \
    --n_workers 16 \
    --data_path common/picr/fhbhands/test_actions_mf1.0_rf0.25_fct5.0_ec \
    --mode hand_obj \
    --compensate_tsl

HO3Dv1

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dv1 \
    --train_datasets ho3d \
    --train_splits train \
    --val_dataset ho3d \
    --val_split test \
    --split_mode objects \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dv1 \
    --init_ckpt CPF_checkpoints/picr/ho3dv1/checkpoint_300.pth.tar

then reload the dumped results and run the GeO optimizer:

# hand-only
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 4.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand

# hand-obj
$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dv1/HO3D/test_objects_mf1_likev1_fct5.0_ec/ \
    --lr 1e-2 \
    --n_iter 500  \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 6.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj
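
Reading the flags above, the fitting objective appears to weight a contact (attraction) term against a repulsion (anti-penetration) term; schematically (our paraphrase of the flag names, not the paper's exact objective):

\[
E \;\approx\; \lambda_{\mathrm{contact}}\, E_{\mathrm{contact}} \;+\; \lambda_{\mathrm{repulsion}}\, E_{\mathrm{repulsion}}
\]

with --lambda_contact_loss and --lambda_repulsion_loss setting the two weights (10.0 and 4.0/6.0 in the commands above).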

HO3Dofficial

Dump:

$ python training/dumppicr_dist.py  \
    --gpu 0,1 \
    --dist_master_addr localhost \
    --dist_master_port 12356 \
    --exp_keyword ho3dofficial \
    --train_datasets ho3d \
    --train_splits val \
    --val_dataset ho3d \
    --val_split test \
    --split_mode official \
    --batch_size 4 \
    --dump_eval \
    --dump \
    --test_dump \
    --vertex_contact_thresh 0.8 \
    --filter_thresh 5.0 \
    --dump_prefix common/picr_ho3dofficial \
    --init_ckpt CPF_checkpoints/picr/ho3dofficial/checkpoint_300.pth.tar

then reload the dumped results and run the GeO optimizer:

$ CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python training/optimize.py \
    --n_workers 24 \
    --data_path common/picr_ho3dofficial/HO3D/test_official_mf1_likev1_fct\(x\)_ec/  \
    --lr 1e-2 \
    --n_iter 500 \
    --hodata_no_use_cache \
    --lambda_contact_loss 10.0 \
    --lambda_repulsion_loss 2.0 \
    --repulsion_query 0.030 \
    --repulsion_threshold 0.080 \
    --mode hand_obj

Results

Testing on the full datasets may take a while (0.5 to 1.5 days), so we also provide our test results in fitting_res.txt.

K-MANO

We provide a PyTorch implementation of our kinematically chained MANO in lixiny/manopth, modified from the original hassony2/manopth. Thanks to Yana Hasson for providing the code.
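
As a usage sketch, based on the original hassony2/manopth interface (the lixiny fork's API may differ slightly):

import torch
from manopth.manolayer import ManoLayer

# Our sketch of a minimal MANO forward pass (hassony2/manopth interface).
mano_layer = ManoLayer(mano_root="assets/mano", side="right", use_pca=True, ncomps=15)
pose = torch.zeros(1, 15 + 3)   # PCA pose coefficients + 3 global rotation params
shape = torch.zeros(1, 10)      # shape (beta) coefficients
verts, joints = mano_layer(pose, shape)
print(verts.shape, joints.shape)  # (1, 778, 3), (1, 21, 3)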

Citation

If you find this work helpful, please consider citing us:

@article{yang2020cpf,
  title={CPF: Learning a Contact Potential Field to Model the Hand-object Interaction},
  author={Yang, Lixin and Zhan, Xinyu and Li, Kailin and Xu, Wenqiang and Li, Jiefeng and Lu, Cewu},
  journal={arXiv preprint arXiv:2012.00924},
  year={2020}
}

If you have any questions or suggestions, do not hesitate to contact me at siriusyang[at]sjtu[dot]edu[dot]cn.

Comments
  • FileNotFoundError: [Errno 2] No such file or directory: 'assets/mano/MANO_RIGHT.pkl'

    I executed this command: python training/run_demo.py --gpu 0 --init_ckpt CPF_checkpoints/picr/fhb/checkpoint_200.pth.tar --honet_mano_fhb_hand

    So I moved the assets/mano folder to CPF/manopth/mano/webuser/, but I am still getting the error.

    opened by anjugopinath 3
  • AttributeError: 'ParsedRequirement' object has no attribute 'req'

    Could you tell me which version of Anaconda to use, please? I am getting the error below:

    neptune:/s/red/a/nobackup/vision/anju/CPF$ conda env create -f environment.yaml
    Collecting package metadata (repodata.json): done
    Solving environment: done

    ==> WARNING: A newer version of conda exists. <==
      current version: 4.9.2
      latest version: 4.10.1

    Please update conda by running

    $ conda update -n base -c defaults conda
    

    Preparing transaction: done
    Verifying transaction: done
    Executing transaction: done
    Installing pip dependencies: | Ran pip subprocess with arguments:
    ['/s/chopin/a/grad/anju/.conda/envs/cpf/bin/python', '-m', 'pip', 'install', '-U', '-r', '/s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt']
    Pip subprocess output:
    Collecting git+https://github.com/utiasSTARS/liegroups.git (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 1))
      Cloning https://github.com/utiasSTARS/liegroups.git to /tmp/pip-req-build-ey_prxpa
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF/manopth (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 12))
    Obtaining file:///s/red/a/nobackup/vision/anju/CPF (from -r /s/red/a/nobackup/vision/anju/CPF/condaenv.agtpjn0v.requirements.txt (line 13))
    Collecting trimesh==3.8.10
      Using cached trimesh-3.8.10-py3-none-any.whl (625 kB)
    Collecting open3d==0.10.0.0
      Using cached open3d-0.10.0.0-cp38-cp38-manylinux1_x86_64.whl (4.7 MB)
    Collecting pyrender==0.1.43
      Using cached pyrender-0.1.43-py3-none-any.whl (1.2 MB)
    Collecting scikit-learn==0.23.2
      Using cached scikit_learn-0.23.2-cp38-cp38-manylinux1_x86_64.whl (6.8 MB)
    Collecting chumpy==0.69
      Using cached chumpy-0.69.tar.gz (50 kB)

    Pip subprocess error:
      Running command git clone -q https://github.com/utiasSTARS/liegroups.git /tmp/pip-req-build-ey_prxpa
    ERROR: Command errored out with exit status 1:
      command: /s/chopin/a/grad/anju/.conda/envs/cpf/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-hnf78qhk/chumpy/setup.py'"'"'; __file__='"'"'/tmp/pip-install-hnf78qhk/chumpy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-k7bp5gq7
      cwd: /tmp/pip-install-hnf78qhk/chumpy/
      Complete output (7 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <module>
          install_requires = [str(ir.req) for ir in install_reqs]
        File "/tmp/pip-install-hnf78qhk/chumpy/setup.py", line 15, in <listcomp>
          install_requires = [str(ir.req) for ir in install_reqs]
      AttributeError: 'ParsedRequirement' object has no attribute 'req'
      ----------------------------------------
    ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

    failed

    CondaEnvException: Pip failed

    opened by anjugopinath 3
  • How to use CPF on both hands?

    Thanks a lot for your great work! I have a question: since you only use MANO_RIGHT.pkl, it seems that CPF can currently only construct a right-hand model, right? What needs to be modified to use CPF on both hands? Thanks!

    opened by buaacyw 3
  • Error when executing command "conda env create -f environment.yaml"

    Hi,

    I get the error below when executing the command "conda env create -f environment.yaml":

    CondaError: Downloaded bytes did not match Content-Length
      url: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/linux-64/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      target_path: /home/anju/anaconda3/pkgs/pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0.tar.bz2
      Content-Length: 564734769
      downloaded bytes: 221675180

    opened by anjugopinath 1
  • Some questions about PiQR code

    In contacthead.py, the three decoders have different input dimensions:

        self.vertex_contact_decoder = PointNetDecodeModule(self._concat_feat_dim, 1)
        self.contact_region_decoder = PointNetDecodeModule(self._concat_feat_dim + 1, self.n_region)
        self.anchor_elasti_decoder = PointNetDecodeModule(self._concat_feat_dim + 17, self.n_anchor)

    I am wondering if this part is used to predict the selected anchor points within each subregion.

    The classification of subregions is obtained by contact_region_decoder, and then the anchor points are predicted by anchor_elasti_decoder; is that right?

    I am a little confused because, according to the paper, Anchor Elasticity (AE) represents the elasticities of the attractive springs, but in the code the output of anchor_elasti_decoder has no relation to the elasticity parameter. I'm wondering if there's some part I've missed.

    Sorry for any trouble caused and thanks for your help!

    opened by lym29 0
  • what's the meaning of "adapt"?

    I notice that there are hand_pose_axisang_adapt_np and hand_pose_axisang_np in your code. Could you please explain the difference between them?

    opened by Yamato-01 5
  • Expected code release date?

    Hi!

    I just read through your paper; congratulations on the great work! I love the fact that you provide an anatomically constrained MANO and the per-object-vertex hand part affinity.

    I look forward to the code release :)

    Do you have a planned date in mind?

    All the best,

    Yana

    opened by hassony2 4