ICON: Implicit Clothed humans Obtained from Normals (CVPR 2022)

Overview

ICON: Implicit Clothed humans Obtained from Normals

Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black

CVPR 2022


Paper PDF · Project Page · YouTube Video · Google Colab · Discord Room





Table of Contents
  1. Who needs ICON
  2. TODO
  3. Installation
  4. Dataset Preprocess
  5. Demo
  6. Citation
  7. Acknowledgments
  8. License
  9. Disclosure
  10. Contact


Who needs ICON?

  • Given an RGB image, you can get:
    • image (png): segmentation, normal images (body + cloth), overlay of RGB and normal
    • mesh (obj): SMPL-(X) body, reconstructed clothed human
    • video (mp4): self-rotating clothed human
ICON's intermediate results
ICON's normal prediction + reconstructed mesh (w/o & w/ smoothing)
  • If you want to create a realistic and animatable 3D clothed avatar directly from a video or sequential images:
    • fully textured with per-vertex color
    • can be animated by SMPL pose parameters (see the sketch after this list)
    • natural pose-dependent clothing deformation
3D clothed avatar created from 400+ images using ICON+SCANimate, animated by AIST++
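
What "animated by SMPL pose parameters" means in practice, as a minimal sketch (not part of this repo) using the smplx package; the model path is a placeholder and assumes you have downloaded the SMPL model files:

import torch
import smplx

# Hypothetical model folder; smplx loads the SMPL parameters from here.
body_model = smplx.create("./models", model_type="smpl")
body_pose = torch.zeros(1, 69)   # 23 body joints x 3 axis-angle values
body_pose[0, 51] = 1.0           # e.g. bend one elbow joint
output = body_model(body_pose=body_pose,
                    global_orient=torch.zeros(1, 3),
                    betas=torch.zeros(1, 10))
print(output.vertices.shape)     # torch.Size([1, 6890, 3]) posed vertices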

TODO

  • testing code and pretrained models (*: self-implemented versions)
    • ICON (w/ & w/o global encoder, w/ PyMAF/HybrIK/PIXIE/PARE as HPS)
    • PIFu* (RGB image + predicted normal map as input)
    • PaMIR* (RGB image + predicted normal map as input, w/ PyMAF/PARE as HPS)
  • Colab notebook
  • dataset processing pipeline
  • training and evaluation code
  • Video-to-Avatar module

Installation

Please follow the Installation Instructions to set up all the required packages, extra data, and models.

Dataset Preprocess

Please follow the Data Preprocessing Instructions to generate the train/val/test dataset from raw scans (THuman2.0).

Demo

cd ICON/apps

# PIFu* (*: re-implementation)
python infer.py -cfg ../configs/pifu.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# PaMIR* (*: re-implementation)
python infer.py -cfg ../configs/pamir.yaml -gpu 0 -in_dir ../examples -out_dir ../results

# ICON w/ global filter (better visual details --> lower Normal Error)
python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python infer.py -cfg ../configs/icon-nofilter.yaml -gpu 0 -in_dir ../examples -out_dir ../results -hps_type {pixie/pymaf/pare/hybrik}
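
To sweep several HPS backends in one run, a small wrapper like the one below works; it is only a convenience sketch around the exact commands above, and the per-backend output folders are my own convention.

# Convenience sketch: run the ICON demo once per HPS backend.
import subprocess

for hps in ["pixie", "pymaf", "pare", "hybrik"]:
    subprocess.run(
        ["python", "infer.py",
         "-cfg", "../configs/icon-filter.yaml",
         "-gpu", "0",
         "-in_dir", "../examples",
         "-out_dir", f"../results/{hps}",   # hypothetical per-backend folders
         "-hps_type", hps],
        check=True,
    )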

More Qualitative Results

Comparison with other state-of-the-art methods
Predicted normals on in-the-wild images with extreme poses


Citation

@inproceedings{xiu2022icon,
  title={{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author={Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle={IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year={2022}
}

Acknowledgments

We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice-over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.


Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 860768 (CLIPE Project).

License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For more questions, please contact [email protected]

For commercial licensing, please contact [email protected]

Issues
  • OpenGL.raw.EGL._errors.EGLError: EGLError( )

    When I run "bash render_batch.sh debug", it gives the following error:

    OpenGL.raw.EGL._errors.EGLError: EGLError( err = EGL_NOT_INITIALIZED, baseOperation = eglInitialize, cArguments = ( <OpenGL._opaque.EGLDisplay_pointer object at 0x7f7b3d0ee2c0>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d0ee440>, <importlib._bootstrap.LP_c_int object at 0x7f7b3d106bc0>, ), result = 0 )

    How can I fix this?
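
    One hedged workaround (an assumption, not an official fix): EGL_NOT_INITIALIZED usually means no EGL-capable driver/display was found, which is common on headless servers. Forcing PyOpenGL onto the EGL platform before anything imports OpenGL, with an NVIDIA EGL driver (libEGL_nvidia.so) installed, often resolves it:

    # Must run before any `import OpenGL` happens anywhere in the process.
    import os
    os.environ["PYOPENGL_PLATFORM"] = "egl"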

    documentation Dataset 
    opened by Yuhuoo 15
  • Trouble getting ICON results

    After installing all packages, I successfully got results for PIFu and PaMIR, but I hit a runtime error when trying to run the ICON demo. Could you advise which setting is wrong?

    $ python infer.py -cfg ../configs/icon-filter.yaml -gpu 0 -in_dir ../examples -out_dir ../results
    
    Traceback (most recent call last):
      File "infer.py", line 304, in <module>
        verts_pr, faces_pr, _ = model.test_single(in_tensor)
      File "./ICON/apps/ICON.py", line 738, in test_single
        sdf = self.reconEngine(opt=self.cfg,
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 148, in forward
        return self._forward_faster(**kwargs)
      File "../lib/common/seg3d_lossless.py", line 170, in _forward_faster
        occupancys = self.batch_eval(coords, **kwargs)
      File "../lib/common/seg3d_lossless.py", line 139, in batch_eval
        occupancys = self.query_func(**kwargs, points=coords2D)
      File "../lib/common/train_util.py", line 338, in query_func
        preds = netG.query(features=features,
      File "../lib/net/HGPIFuNet.py", line 285, in query
        smpl_sdf, smpl_norm, smpl_cmap, smpl_ind = cal_sdf_batch(
      File "../lib/dataset/mesh_util.py", line 231, in cal_sdf_batch
        residues, normals, pts_cmap, pts_ind = func(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/mesh_distance.py", line 79, in forward
        output = self.search_tree(triangles, points)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 109, in forward
        output = BVHFunction.apply(
      File "./.virtualenvs/icon/lib/python3.8/site-packages/bvh_distance_queries/bvh_search_tree.py", line 42, in forward
        outputs = bvh_distance_queries_cuda.distance_queries(
    RuntimeError: after reduction step 1: cudaErrorInvalidDevice: invalid device ordinal
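
    The failing call is inside the bvh_distance_queries CUDA extension, which assumes the default CUDA device; the v1.0.0-rc2 release notes below replace it with PyTorch3D and Kaolin precisely to improve CUDA compatibility. On older checkouts, one hedged workaround is to make the target GPU the only visible device before launch:

    # Hedged sketch: hide all other GPUs so the extension's implicit
    # "device 0" is the GPU you actually want. Set before importing torch.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # placeholder: your target GPU index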
    
    CUDA 
    opened by Samiepapa 12
  • THuman Dataset preprocess

    Hi, I found the program running very slowly when I ran bash render_batch.sh debug all. I tracked it down to hits = mesh.ray.intersects_any(origins + delta * normals, vectors), where the number of rays is in the millions. Is that why it is so slow?
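
    A hedged pointer, not a confirmed diagnosis: trimesh silently falls back to a slow pure-Python ray backend when pyembree is not installed, so checking which intersector is active is a quick test:

    # Diagnostic sketch; the mesh path is a placeholder. With pyembree installed
    # (e.g. via conda-forge), trimesh uses the much faster Embree intersector.
    import trimesh

    mesh = trimesh.load("scan.obj", process=False)
    print(type(mesh.ray))  # ray_pyembree.RayMeshIntersector if Embree is active,
                           # ray_triangle.RayMeshIntersector (slow) otherwise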

    documentation Dataset 
    opened by mmmcn 11
  • ConnectionError: HTTPSConnectionPool

    requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1tCU5MM1LhRgGou5OpmpjBQbSrYIUoYab (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbc4ad7deb0>: Failed to establish a new connection: [Errno 110] Connection timed out'))

    documentation 
    opened by shuoshuoxu 11
  • Error: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    I get this error when installing locally on my workstation via the Colab bash script.

    .../ICON/pytorch3d/pytorch3d/_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNSt15__exception_ptr13exception_ptr10_M_releaseEv

    This is after installing pytorch3d locally, as recommended; Conda has too many conflicts and never resolves.

    Installing torch through pip works (1.8.2+cu111) up until the infer.py step, because bvh_distance_queries only supports CUDA 11.0. This would most likely require compiling against 11.0, and it will probably lead to more errors, as I don't know what this repository's dependencies require as far as torch goes.
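
    A hedged recipe for the undefined-symbol error: it typically means pytorch3d was compiled against a different PyTorch/C++ ABI than the one currently installed, so rebuilding pytorch3d inside the active environment usually clears it:

    # Sketch: rebuild pytorch3d against the currently installed torch; pin a
    # release tag that matches your CUDA/torch combination if needed.
    import subprocess, sys

    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--force-reinstall",
         "--no-cache-dir", "git+https://github.com/facebookresearch/pytorch3d.git"],
        check=True,
    )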

    CUDA 
    opened by ftaker887 9
  • ModuleNotFoundError: No module named 'bvh_distance_queries_cuda'

    Hi, thank you so much for the wonderful work and the corresponding code. I am facing the following issue: https://github.com/YuliangXiu/ICON/blob/0045bd10f076bf367d25b7dac41d0d5887b8694f/lib/bvh-distance-queries/bvh_distance_queries/bvh_search_tree.py#L27

    Is there any .py file called bvh_distance_queries_cuda? Please let me know a possible solution. Thank you for your effort and help :)
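
    A hedged answer: bvh_distance_queries_cuda is not a .py file but a compiled CUDA extension; it only exists after building the package that ships in lib/bvh-distance-queries, e.g. something like:

    # Sketch, assuming the standard setup.py in lib/bvh-distance-queries and a
    # CUDA toolkit matching your installed torch.
    import subprocess, sys

    subprocess.run([sys.executable, "setup.py", "install"],
                   cwd="lib/bvh-distance-queries", check=True)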

    CUDA 
    opened by Pallab38 7
  • Some questions about training

    I would like to know some details about training. Is the ground-truth SMPL or the predicted SMPL used when training ICON? And what about the normal images? From my understanding of the paper and practice, ICON trains the normal network first and then the implicit reconstruction network. When reproducing ICON, I don't know whether to choose ground-truth or predicted data for the SMPL model and the normal images, respectively.

    Dataset Training 
    opened by sunjc0306 6
  • Segmentation fault (core dumped)

    When I run CUDA_VISIBLE_DEVICES=0 python train.py -cfg ../configs/train/icon-filter.yaml after preparing the data, I get "Segmentation fault (core dumped)". I tried rebooting the machine, but the error persists.

    opened by Yuhuoo 5
  • Problem when using pixie as hps_type

    Sorry to open a similar issue. It came up before in #30, but it wasn't solved well, so I'm asking again; the problem has remained unsolved for about five days.

    I changed PyYAML to 5.1.1 according to the advice in that issue, but the problem stayed the same, so I restored the latest PyYAML. Is there any other solution?

    "Sorry, try to use PyYAML==5.1.1"

    Traceback (most recent call last):
      File "infer.py", line 96, in <module>
        dataset = TestDataset(dataset_param, device)
      File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 105, in __init__
        self.hps = PIXIE(config = pixie_cfg, device=self.device)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 49, in __init__
        self._create_model()
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 115, in _create_model
        self.smplx = SMPLX(self.cfg.model).to(self.device)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/models/SMPLX.py", line 156, in __init__
        self.extra_joint_selector = JointsFromVerticesSelector(
      File "/workspace/fashion-ICON/apps/../lib/pixielib/models/lbs.py", line 399, in __init__
        data = yaml.load(f)
    TypeError: load() missing 1 required positional argument: 'Loader'
    

    Per the original repo's issue (https://github.com/YuliangXiu/ICON/issues/30) and the comments above, this error might be resolved by installing PyYAML==5.1.1, but that triggers another error:

    Traceback (most recent call last):
      File "infer.py", line 102, in <module>
        for data in pbar:
      File "/opt/conda/envs/icon/lib/python3.8/site-packages/tqdm/std.py", line 1180, in __iter__
        for obj in iterable:
      File "/workspace/fashion-ICON/apps/../lib/dataset/TestDataset.py", line 191, in __getitem__
        preds_dict = self.hps.forward(img_hps.to(self.device))
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 56, in forward
        param_dict = self.encode({'body': {'image': data}}, threthold=True, keep_local=True, copy_and_paste=False)
      File "/opt/conda/envs/icon/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 259, in encode
        cropped_image, cropped_joints_dict = self.part_from_body(image_hd, part_name, points_dict)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/pixie.py", line 166, in part_from_body
        cropped_image, tform = self.Cropper[cropper_key].crop(
      File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 98, in crop
        cropped_image, tform = crop_tensor(image, center, bbox_size, self.crop_size)
      File "/workspace/fashion-ICON/apps/../lib/pixielib/utils/tensor_cropper.py", line 78, in crop_tensor
        cropped_image = warp_affine(
    TypeError: warp_affine() got an unexpected keyword argument 'flags'
    

    Is there any solution to get the PIXIE module running?
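
    Both tracebacks have well-known library-version causes (a hedged reading, not maintainer-verified): PyYAML >= 6 made the Loader argument mandatory, and newer kornia renamed warp_affine's flags keyword to mode. Patching the two call sites, or pinning the older releases the repo was developed against, should work:

    import yaml

    # 1) PyYAML >= 6: pass a Loader explicitly (call site: lib/pixielib/models/lbs.py).
    with open("extra_joints.yaml") as f:             # placeholder file name
        data = yaml.load(f, Loader=yaml.SafeLoader)  # or: yaml.safe_load(f)

    # 2) kornia renamed warp_affine(..., flags="bilinear") to mode="bilinear";
    #    either rename the kwarg in lib/pixielib/utils/tensor_cropper.py or pin
    #    the older kornia release.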

    HPS 
    opened by pinga999 4
  • Bugs when inferring on another GPU instead of GPU 0

    Thanks for your great work. I want to use another gpu to run the demo, so I modify https://github.com/YuliangXiu/ICON/blob/53273e081cbc15e3afeba098f067a32cd4db4771/apps/infer.py#L71 to

       os.environ['CUDA_VISIBLE_DEVICES'] = "0,1,2,3,4,5,6,7"
    

    Then I run

     python infer.py -cfg ../configs/icon-filter.yaml -gpu 1 -in_dir ../examples -out_dir ../results -hps_type pixie
    

    But errors occur (screenshot omitted). I then found that the implementation of point_to_mesh_distance in Kaolin uses .cuda() to force tensors onto GPU 0 (see https://github.com/YuliangXiu/ICON/blob/53273e081cbc15e3afeba098f067a32cd4db4771/lib/dataset/mesh_util.py#L282). So I modified https://github.com/NVIDIAGameWorks/kaolin/blob/54d8fa438f8987444637f80da02bb0b862d3694d/kaolin/metrics/trianglemesh.py#L116-L118 to

            min_dist = torch.zeros((num_points), dtype=points.dtype).to(points.device)
            min_dist_idx = torch.zeros((num_points), dtype=torch.long).to(points.device)
            dist_type = torch.zeros((num_points), dtype=torch.int32).to(points.device)
    

    and reinstalled Kaolin from local. Then I meet further errors (screenshots omitted); using GPU 2, GPU 3, ... also triggers them. Could you help me solve these errors?

    Inference 
    opened by hoyeYang 4
  • Dimension mismatch issue when running on a local PC

    Hey @YuliangXiu, I tried to set up the complete dependency stack on my Ubuntu 18.04 PC (PyTorch 1.6, CUDA 10.1), installing everything in requirements.txt one by one, and faced a lot of issues along the way. I then hit a problem loading the model in the rembg module, so I manually downloaded the model file, modified rembg accordingly, and corrected the process_image function in lib/pymaf/utils/imutils.py.

    This produces hps_img with shape [3,224,224] in my case, which is then fed to pymaf_net.py at line 282 to extract features using the defined backbone (res50). But this backbone's first convolution has weight shape [64, 3, 7, 7] and expects a 4-D batched input, and that's why I'm getting the dimension-mismatch runtime error.

    Note: I have modified image_to_pymaf_tensor in get_transformer() from lib/pymaf/utils/imutils.py as per my PyTorch version.

    image_to_pymaf_tensor = transforms.Compose([
            transforms.ToPILImage(),                   #Added by us
            transforms.Resize(224),
            transforms.ToTensor(),                     #Added by us
            transforms.Normalize(mean=constants.IMG_NORM_MEAN,
                                 std=constants.IMG_NORM_STD)
        ])
    
    ICON:
    [w/ Global Image Encoder]: True
    [Image Features used by MLP]: ['normal_F', 'normal_B']
    [Geometry Features used by MLP]: ['sdf', 'norm', 'vis', 'cmap']
    [Dim of Image Features (local)]: 6
    [Dim of Geometry Features (ICON)]: 7
    [Dim of MLP's first layer]: 13
    
    initialize network with xavier
    initialize network with xavier
    Resume MLP weights from ../data/ckpt/icon-filter.ckpt
    Resume normal model from ../data/ckpt/normal.ckpt
    Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Using cache found in /home/ujjawal/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Dataset Size: 2
      0%|                                                                                                                                             | 0/2 [00:00<?, ?it/s]*********************************
    img_np shape: (512, 512, 3)
    img_hps shape: torch.Size([3, 224, 224])
    input shape x in pymaf_net : torch.Size([3, 224, 224])
    input shape x in hmr : torch.Size([3, 224, 224])
      0%|                                                                                                                                             | 0/2 [00:01<?, ?it/s]
    Traceback (most recent call last):
      File "infer.py", line 97, in <module>
        for data in pbar:
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/tqdm/std.py", line 1130, in __iter__
        for obj in iterable:
      File "../lib/dataset/TestDataset.py", line 166, in __getitem__
        preds_dict = self.hps(img_hps.to(self.device))
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/pymaf/models/pymaf_net.py", line 285, in forward
        s_feat, g_feat = self.feature_extractor(x)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "../lib/pymaf/models/hmr.py", line 159, in forward
        x = self.conv1(x)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 419, in forward
        return self._conv_forward(input, self.weight)
      File "/home/ujjawal/miniconda2/envs/caffe2/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead
    
    

    Please suggest your view on the same.
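
    A hedged fix: the backbone's first convolution needs a batched NCHW tensor, so adding a batch dimension before the HPS call in TestDataset.py should resolve the mismatch:

    # img_hps is [3, 224, 224]; unsqueeze(0) makes it [1, 3, 224, 224],
    # the 4-D input that conv2d expects.
    preds_dict = self.hps(img_hps.unsqueeze(0).to(self.device))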

    opened by ujjawalcse 4
  • Hope for suggestions about solving an EGL error

    Hi, I am very interested in your work and want to try it, but I got an EGL error when I ran bash scripts/render_batch.sh debug all:

    Rendering thuman2 0300
    Traceback (most recent call last):
      File "scripts/render_single.py", line 45, in <module>
        initialize_GL_context(width=size, height=size, egl=egl)
      File "/home/lws/code/ICON-master/lib/renderer/gl/init_gl.py", line 23, in initialize_GL_context
        create_opengl_context((width, height))
      File "/home/lws/code/ICON-master/lib/renderer/gl/glcontext.py", line 115, in create_opengl_context
        egl_surf = egl.eglCreatePbufferSurface(egl_display, egl_cfg,
      File "/home/lws/anaconda3/envs/icon/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 415, in __call__
        return self( *args, **named )
      File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
    OpenGL.raw.EGL._errors.EGLError: EGLError(
            err = EGL_BAD_CONFIG,
            baseOperation = eglCreatePbufferSurface,
            cArguments = (
                    <OpenGL._opaque.EGLDisplay_pointer object at 0x7efe7087f640>,
                    <OpenGL._opaque.EGLConfig_pointer object at 0x7efe7087fdc0>,
                    <lib.renderer.gl.glcontext.c_int_Array_5 object at 0x7efe7087fe40>,
            ),
            result = <OpenGL._opaque.EGLSurface_pointer object at 0x7efe6e58a140>
    )
    thuman2 END----------
    
    

    I tried to solve it and googled for solutions. Have you met this problem before, or can you give me any advice?

    opened by liwenssss 2
  • RuntimeError: CUDA error: invalid device function

    bash scripts/vis_batch.sh debug all

    thuman2 START----------
    Debug visibility
    Visibility thuman2 0300
    Traceback (most recent call last):
      File "scripts/vis_single.py", line 58, in <module>
        smpl_vis = get_visibility(xy, z, smpl_faces)
      File "/home/liaoqi/Code/ICON/lib/dataset/mesh_util.py", line 207, in get_visibility
        pix_to_face, zbuf, bary_coords, dists = rasterize_meshes(
      File "/home/liaoqi/Code/ICON/pytorch3d/pytorch3d/renderer/mesh/rasterize_meshes.py", line 234, in rasterize_meshes
        pix_to_face, zbuf, barycentric_coords, dists = _RasterizeFaceVerts.apply(
    File "/home/liaoqi/Code/ICON/pytorch3d/pytorch3d/renderer/mesh/rasterize_meshes.py", line 308, in forward
        pix_to_face, zbuf, barycentric_coords, dists = _C.rasterize_meshes(
    RuntimeError: CUDA error: invalid device function
    scripts/vis_single.sh: line 32: 15195 Segmentation fault      (core dumped) python $PYTHON_SCRIPT -s $SUBJECT -o $SAVE_DIR -r $NUM_VIEWS -m $MODE
    thuman2 END----------
    

    What is this problem? I am using CUDA 10.0, pytorch3d 0.6.2, and PyTorch 1.6.0.
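
    "invalid device function" usually means the compiled pytorch3d kernels do not match the GPU's compute capability; rebuilding the vendored pytorch3d for the right architecture is a hedged fix:

    # Sketch: set your card's compute capability (e.g. "6.1" for a GTX 1080 Ti)
    # before rebuilding the pytorch3d checkout inside the ICON folder.
    import os, subprocess, sys

    os.environ["TORCH_CUDA_ARCH_LIST"] = "6.1"  # placeholder architecture
    subprocess.run([sys.executable, "-m", "pip", "install",
                    "--force-reinstall", "./pytorch3d"], check=True)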

    opened by Andyen512 0
  • How to obtain the centimeter unit?

    Hello! Excuse me again! I would like to know the basis for the centimeter unit in the paper. As far as I know, since each dataset uses different units in its model coordinate system, the models need to be normalized to a reference space (e.g. [-0.5,-0.5,-0.5] to [0.5,0.5,0.5]) when rendering. The normalized model has no units, and the height of each human is unknown, so how is the centimeter unit obtained? Thanks!

    opened by sunjc0306 1
  • TypeError: 'NoneType' object is not subscriptable

    (ICON) [email protected]:/hy-tmp/ICON# python -m apps.infer -cfg ./configs/pifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results
    OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
    normal environment...
    PIFU:
    [w/ Global Image Encoder]: True
    [Image Features used by MLP]: ['image', 'normal_F', 'normal_B']
    [Dim of Image Features (global)]: 12
    [Dim of Geometry Features (PIFu)]: 1 (z-value)
    [Dim of MLP's first layer]: 13

    Using cache found in /root/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Using cache found in /root/.cache/torch/hub/NVIDIA_DeepLearningExamples_torchhub
    Using pymaf as HPS Estimator

    Dataset Size: 9
    Body Fitting = 0.295: 100%|██████████| 1/1 [00:00<00:00, 1.19it/s]
    22097467bffc92d4a5c4246f7d4edb75:   0%|          | 0/9 [00:06<?, ?it/s]
    Traceback (most recent call last):
      File "/hy-tmp/envs/ICON/lib/python3.8/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/hy-tmp/envs/ICON/lib/python3.8/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/hy-tmp/ICON/apps/infer.py", line 324, in <module>
        verts_pr, faces_pr, _ = model.test_single(in_tensor)
      File "/hy-tmp/ICON/apps/ICON.py", line 746, in test_single
        verts_pr, faces_pr = self.reconEngine.export_mesh(sdf)
      File "/hy-tmp/ICON/lib/common/seg3d_lossless.py", line 586, in export_mesh
        final = occupancys[:-1, :-1, :-1].contiguous()
    TypeError: 'NoneType' object is not subscriptable

    When I run the demo, I get the error above. Why does this happen?

    opened by Yuhuoo 2
  • Add web demo/model to Hugging Face

    Hi, would you be interested in adding ICON to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. Models, datasets, and Spaces (web demos) can be added to a user account or organization, similar to GitHub.

    Examples from other organizations:
    • Keras: https://huggingface.co/keras-io
    • Microsoft: https://huggingface.co/microsoft
    • Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
    • GitHub: https://github.com/salesforce/BLIP, Space: https://huggingface.co/spaces/salesforce/BLIP
    • GitHub: https://github.com/facebookresearch/omnivore, Space: https://huggingface.co/spaces/akhaliq/omnivore

    Guides for adding Spaces/models/datasets to your org:
    • How to add a Space: https://huggingface.co/blog/gradio-spaces
    • How to add models: https://huggingface.co/docs/hub/adding-a-model
    • Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

    enhancement 
    opened by AK391 5
Releases (v.1.0.0-rc2)
  • v.1.0.0-rc2 (Mar 7, 2022)

    Some updates:

    • HPS support: PyMAF (SMPL), PARE (SMPL), PIXIE (SMPL-X)
    • Google Colab support
    • Replace bvh-distance-queries with PyTorch3D and Kaolin to improve CUDA compatibility
    • Fix some issues
    Source code(tar.gz)
    Source code(zip)
  • v.1.0.0-rc1 (Jan 30, 2022)

    First commit of ICON:

    • image-based inference code
    • pretrained model of ICON, PIFu*, PaMIR* (*: self-implementation)
    • homepage: https://icon.is.tue.mpg.de
    Source code(tar.gz)
    Source code(zip)
Owner
Yuliang Xiu
Ph.D. student in Graphics & Vision; 3D virtual avatar researcher. Plays with pixels and voxels.