This is an official implementation of the High-Resolution Transformer for Dense Prediction.

Overview

High-Resolution Transformer for Dense Prediction

Introduction

This is the official implementation of the High-Resolution Transformer (HRT). We present a High-Resolution Transformer (HRT) that learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer, which produces low-resolution representations at a high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), along with local-window self-attention that performs self-attention over small non-overlapping image windows, to improve memory and computation efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on human pose estimation and semantic segmentation tasks.

  • The High-Resolution Transformer architecture:

teaser
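
To make the two key ingredients concrete, below is a minimal PyTorch sketch of local-window self-attention and an FFN with a 3x3 depth-wise convolution. All module names, shapes, and hyperparameters here are illustrative assumptions for exposition, not the repository's actual implementation (see e.g. seg/lib/models/backbones/hrt in this repo for the real code).

    import torch
    import torch.nn as nn

    class LocalWindowAttention(nn.Module):
        """Self-attention restricted to non-overlapping ws x ws windows."""
        def __init__(self, dim, window_size=7, num_heads=2):
            super().__init__()
            self.ws = window_size
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x):  # x: (B, C, H, W); H, W divisible by window_size
            B, C, H, W = x.shape
            ws = self.ws
            # partition the feature map into (B * num_windows, ws*ws, C) sequences
            x = x.reshape(B, C, H // ws, ws, W // ws, ws)
            x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)
            x, _ = self.attn(x, x, x)  # attention never crosses a window border
            # reverse the partition back to (B, C, H, W)
            x = x.reshape(B, H // ws, W // ws, ws, ws, C)
            return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

    class ConvFFN(nn.Module):
        """FFN whose 3x3 depth-wise conv exchanges information across windows."""
        def __init__(self, dim, expansion=4):
            super().__init__()
            hidden = dim * expansion
            self.net = nn.Sequential(
                nn.Conv2d(dim, hidden, 1),
                nn.GELU(),
                nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # depth-wise
                nn.GELU(),
                nn.Conv2d(hidden, dim, 1),
            )

        def forward(self, x):  # x: (B, C, H, W)
            return self.net(x)

    x = torch.randn(1, 32, 56, 56)        # 56 = 8 windows of size 7
    x = x + LocalWindowAttention(32)(x)   # windowed attention + residual
    x = x + ConvFFN(32)(x)                # depth-wise conv bridges adjacent windows

The depth-wise convolution is what lets neighbouring windows communicate, since the attention itself is strictly local.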

Pose estimation

2D Human Pose Estimation

Results on COCO val2017, using a person detector with 56.4 AP on the COCO val2017 dataset.

| Backbone | Input Size | AP | AP50 | AP75 | AR(M) | AR(L) | AR | ckpt | log | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| HRT-S | 256x192 | 74.0% | 90.2% | 81.2% | 70.4% | 80.7% | 79.4% | ckpt | log | script |
| HRT-S | 384x288 | 75.6% | 90.3% | 82.2% | 71.6% | 82.5% | 80.7% | ckpt | log | script |
| HRT-B | 256x192 | 75.6% | 90.8% | 82.8% | 71.7% | 82.6% | 80.8% | ckpt | log | script |
| HRT-B | 384x288 | 77.2% | 91.0% | 83.6% | 73.2% | 84.2% | 82.0% | ckpt | log | script |

Results on COCO test-dev, using a person detector with 56.4 AP on the COCO val2017 dataset.

| Backbone | Input Size | AP | AP50 | AP75 | AR(M) | AR(L) | AR | ckpt | log | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| HRT-S | 384x288 | 74.5% | 92.3% | 82.1% | 70.7% | 80.6% | 79.8% | ckpt | log | script |
| HRT-B | 384x288 | 76.2% | 92.7% | 83.8% | 72.5% | 82.3% | 81.2% | ckpt | log | script |

The models are first pre-trained on the ImageNet-1K dataset and then fine-tuned on COCO; the results above are reported on val2017 and test-dev.

Semantic segmentation

Cityscapes

Performance on the Cityscapes dataset. The models are trained with an input size of 512x1024 and tested with an input size of 1024x2048.

| Methods | Backbone | Window Size | Train Set | Test Set | Iterations | Batch Size | OHEM | mIoU | mIoU (Multi-Scale) | Log | ckpt | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OCRNet | HRT-S | 7x7 | Train | Val | 80000 | 8 | Yes | 80.0 | 81.0 | log | ckpt | script |
| OCRNet | HRT-B | 7x7 | Train | Val | 80000 | 8 | Yes | 81.4 | 82.0 | log | ckpt | script |
| OCRNet | HRT-B | 15x15 | Train | Val | 80000 | 8 | Yes | 81.9 | 82.6 | log | ckpt | script |

PASCAL-Context

The models are trained with an input size of 520x520 and tested at the original image size.

| Methods | Backbone | Window Size | Train Set | Test Set | Iterations | Batch Size | OHEM | mIoU | mIoU (Multi-Scale) | Log | ckpt | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OCRNet | HRT-S | 7x7 | Train | Val | 60000 | 16 | Yes | 53.8 | 54.6 | log | ckpt | script |
| OCRNet | HRT-B | 7x7 | Train | Val | 60000 | 16 | Yes | 56.3 | 57.1 | log | ckpt | script |
| OCRNet | HRT-B | 15x15 | Train | Val | 60000 | 16 | Yes | 57.6 | 58.5 | log | ckpt | script |

COCO-Stuff

The models are trained with an input size of 520x520 and tested at the original image size.

| Methods | Backbone | Window Size | Train Set | Test Set | Iterations | Batch Size | OHEM | mIoU | mIoU (Multi-Scale) | Log | ckpt | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OCRNet | HRT-S | 7x7 | Train | Val | 60000 | 16 | Yes | 37.9 | 38.9 | log | ckpt | script |
| OCRNet | HRT-B | 7x7 | Train | Val | 60000 | 16 | Yes | 41.6 | 42.5 | log | ckpt | script |
| OCRNet | HRT-B | 15x15 | Train | Val | 60000 | 16 | Yes | 42.4 | 43.3 | log | ckpt | script |

ADE20K

The models are trained with an input size of 520x520 and tested at the original image size. The results with window size 15x15 will be updated later.

| Methods | Backbone | Window Size | Train Set | Test Set | Iterations | Batch Size | OHEM | mIoU | mIoU (Multi-Scale) | Log | ckpt | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OCRNet | HRT-S | 7x7 | Train | Val | 150000 | 8 | Yes | 44.0 | 45.1 | log | ckpt | script |
| OCRNet | HRT-B | 7x7 | Train | Val | 150000 | 8 | Yes | 46.3 | 47.6 | log | ckpt | script |
| OCRNet | HRT-B | 13x13 | Train | Val | 150000 | 8 | Yes | 48.7 | 50.0 | log | ckpt | script |
| OCRNet | HRT-B | 15x15 | Train | Val | 150000 | 8 | Yes | - | - | - | - | - |

Classification

Results on ImageNet-1K

| Backbone | Top-1 Acc | Top-5 Acc | #Params | FLOPs | ckpt | log | script |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| HRT-T | 78.6% | 94.2% | 8.0M | 1.83G | ckpt | log | script |
| HRT-S | 81.2% | 95.6% | 13.5M | 3.56G | ckpt | log | script |
| HRT-B | 82.8% | 96.3% | 50.3M | 13.71G | ckpt | log | script |

Citation

If you find this project useful in your research, please consider citing:

@article{YuanFHZCW21,
  title={HRT: High-Resolution Transformer for Dense Prediction},
  author={Yuhui Yuan and Rao Fu and Lang Huang and Chao Zhang and Xilin Chen and Jingdong Wang},
  journal={arXiv preprint},
  year={2021}
}

Acknowledgment

This project is developed based on the Swin-Transformer, openseg.pytorch, and mmpose.

Comments
  • Question about Local Self-Attention of your code

    Hi, I'm very interested in your work on local self-attention and feature fusion in Transformers, but I have a question. The input image size for the image classification task in the source code is fixed (224 or 384), i.e., an integer multiple of 32. If the input size is not fixed, as in detection tasks where the input is, e.g., 800x1333, the feature map can still be divided into windows by padding, but how should the key_padding_mask be handled?

    The attention weight map has shape [bs x H/7 x W/7, 49, 49] (the default window size is 7), but the key padding mask has shape [1, HW], so how can I convert this mask to match the attention weight map?

    I sincerely hope you can give me some advice about this question. Thanks !
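
    One possible way to build a matching mask (a sketch; the sizes, the padding convention, and the assumption that True marks padded pixels are all hypothetical, not this repo's actual code):

        import torch

        B, H, W, ws = 2, 100, 167, 7            # hypothetical unpadded feature size
        H_pad = (H + ws - 1) // ws * ws         # 105
        W_pad = (W + ws - 1) // ws * ws         # 168

        mask = torch.zeros(1, H_pad, W_pad, dtype=torch.bool)
        mask[:, H:, :] = True                   # True marks padded pixels
        mask[:, :, W:] = True

        # partition the mask exactly like the feature map:
        # (1, H_pad, W_pad) -> (num_windows, ws*ws)
        mask = mask.reshape(1, H_pad // ws, ws, W_pad // ws, ws)
        mask = mask.permute(0, 1, 3, 2, 4).reshape(-1, ws * ws)

        # tile over the batch: (B * num_windows, ws*ws), usable as key_padding_mask
        # (a window that is entirely padding would still need special care,
        #  e.g. un-masking one position to avoid NaNs in the softmax)
        key_padding_mask = mask.repeat(B, 1)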

    opened by Huzhen757 4
  • about pose training speed

    The computational cost of HRT-S at 256x192 is about 2.8 GFLOPs, but when I train it, I find that it is significantly slower than HRNet, which has about 7.9 GFLOPs. Do you know how to solve this? Thanks.

    opened by maowayne123 4
  • Is the padding module wrong?

    Hello, I observe that in the class PadBlock, the operation performed is "n (qh ph) (qw pw) c -> (ph pw) (n qh qw) c", which moves the padding groups into the batch dim. This may cause a problem: the pad-group-wise attention is computed across all samples in the batch. Do you think the permutation should be "n (qh ph) (qw pw) c -> (n ph pw) (qh qw) c"? (See the shape experiment below.)
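
    A tiny shape experiment (with hypothetical sizes) makes the difference between the two patterns visible:

        import torch
        from einops import rearrange

        x = torch.randn(2, 4, 4, 8)  # n=2 samples, 4x4 grid, c=8; ph = pw = 2

        a = rearrange(x, "n (qh ph) (qw pw) c -> (ph pw) (n qh qw) c", ph=2, pw=2)
        b = rearrange(x, "n (qh ph) (qw pw) c -> (n ph pw) (qh qw) c", ph=2, pw=2)

        print(a.shape)  # torch.Size([4, 8, 8]): the token axis (n qh qw) mixes both samples
        print(b.shape)  # torch.Size([8, 4, 8]): each token group stays within one sample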

    opened by UBCIntelliview 3
  • Need pre-trained model on ImageNet-1K

    Hi, thanks for your work! I'm trying to train your model with a custom config from scratch, but have not found any pre-trained models on ImageNet-1K. Do you plan to share these models?

    opened by WinstonDeng 2
  • undefined symbol: _Z13__THCudaCheck9cudaErrorPKci

        FutureWarning,
        WARNING:torch.distributed.run:
        Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

        Traceback (most recent call last):
          File "tools/train.py", line 168, in <module>
            main()
          File "tools/train.py", line 122, in main
            env_info_dict = collect_env()
          File "/dataset/wh/wh_code/HRFormer-main/pose/mmpose/utils/collect_env.py", line 8, in collect_env
            env_info = collect_basic_env()
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/env.py", line 85, in collect_env
            from mmcv.ops import get_compiler_version, get_compiling_cuda_version
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/ops/__init__.py", line 1, in <module>
            from .bbox import bbox_overlaps
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/ops/bbox.py", line 3, in <module>
            ext_module = ext_loader.load_ext('_ext', ['bbox_overlaps'])
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/utils/ext_loader.py", line 12, in load_ext
            ext = importlib.import_module('mmcv.' + name)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/importlib/__init__.py", line 127, in import_module
            return _bootstrap._gcd_import(name[level:], package, level)
        ImportError: /home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/_ext.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _Z13__THCudaCheck9cudaErrorPKci
        ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 42674) of binary: /home/celia/anaconda3/envs/open-mmlab/bin/python
        Traceback (most recent call last):
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/runpy.py", line 193, in _run_module_as_main
            "__main__", mod_spec)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/runpy.py", line 85, in _run_code
            exec(code, run_globals)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
            main()
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
            launch(args)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
            run(args)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run
            )(*cmd_args)
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
            return launch_agent(self._config, self._entrypoint, list(args))
          File "/home/celia/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent
            failures=result.failures,
        torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

        tools/train.py FAILED

        Failures:
          [1]:
            time       : 2022-10-24_10:03:43
            host       : omnisky
            rank       : 1 (local_rank: 1)
            exitcode   : 1 (pid: 42675)
            error_file : <N/A>
            traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
          [2]:
            time       : 2022-10-24_10:03:43
            host       : omnisky
            rank       : 2 (local_rank: 2)
            exitcode   : 1 (pid: 42676)
            error_file : <N/A>
            traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
          [3]:
            time       : 2022-10-24_10:03:43
            host       : omnisky
            rank       : 3 (local_rank: 3)
            exitcode   : 1 (pid: 42677)
            error_file : <N/A>
            traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

        Root Cause (first observed failure):
          [0]:
            time       : 2022-10-24_10:03:43
            host       : omnisky
            rank       : 0 (local_rank: 0)
            exitcode   : 1 (pid: 42674)
            error_file : <N/A>
            traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

    opened by yzew 1
  • Pretrained model for cityscapes

    Thanks for your great job. I have some trouble reproducing the segmentation results on Cityscapes. I checked the log and found that the problem might be the pretrained model; for now I use the released ImageNet model as the pretrained weights. Can you release the pretrained model for Cityscapes? Thanks a lot!

    opened by devillala 1
  • Cuda out of memory on resume (incl. fix)

    It ran out of memory with the exact same params as in training (which had previously worked). Loading the checkpoint to the CPU first fixes the problem:

    resume_dict = torch.load(self.configer.get('network', 'resume'), map_location='cpu')

    maybe it helps somebody

        2021-08-25 14:51:29,793 INFO [data_helper.py, 126] Input keys: ['img']
        2021-08-25 14:51:29,793 INFO [data_helper.py, 127] Target keys: ['labelmap']
        Traceback (most recent call last):
          File "/home/rsa-key-20190908/HRFormer/seg/main.py", line 541, in <module>
            model.train()
          File "/home/rsa-key-20190908/HRFormer/seg/segmentor/trainer.py", line 438, in train
            self.__train()
          File "/home/rsa-key-20190908/HRFormer/seg/segmentor/trainer.py", line 187, in __train
            outputs = self.seg_net(*inputs)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 705, in forward
            output = self.module(*inputs[0], **kwargs[0])
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/nets/hrt.py", line 117, in forward
            x = self.backbone(x)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/hrt_backbone.py", line 579, in forward
            y_list = self.stage3(x_list)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 119, in forward
            input = module(input)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/hrt_backbone.py", line 282, in forward
            x[i] = self.branches[i](x[i])
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 119, in forward
            input = module(input)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/modules/transformer_block.py", line 103, in forward
            x = x + self.drop_path(self.attn(self.norm1(x), H, W))
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/modules/multihead_isa_pool_attention.py", line 41, in forward
            out, _, _ = self.attn(x_permute, x_permute, x_permute, rpe=self.with_rpe, **kwargs)
          File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
            result = self.forward(*input, **kwargs)
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/modules/multihead_isa_attention.py", line 116, in forward
            rpe=rpe,
          File "/home/rsa-key-20190908/HRFormer/seg/lib/models/backbones/hrt/modules/multihead_isa_attention.py", line 311, in multi_head_attention_forward
            ) + relative_position_bias.unsqueeze(0)
        RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 6.64 GiB already allocated; 27.25 MiB free; 6.66 GiB reserved in total by PyTorch)
        Killing subprocess 6170

    opened by marcok 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely, and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
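
    For reference, the usual shape of such a patch is roughly the following sketch (the actual pull request may differ in its details):

        import os
        import tarfile

        def is_within_directory(directory, target):
            # resolve both paths and check that target stays inside directory
            abs_directory = os.path.abspath(directory)
            abs_target = os.path.abspath(target)
            return os.path.commonprefix([abs_directory, abs_target]) == abs_directory

        def safe_extractall(tar, path="."):
            # refuse to extract if any member would escape the destination directory
            for member in tar.getmembers():
                member_path = os.path.join(path, member.name)
                if not is_within_directory(path, member_path):
                    raise RuntimeError("Attempted path traversal in tar file")
            tar.extractall(path)

        with tarfile.open("archive.tar") as tar:  # hypothetical archive name
            safe_extractall(tar, "output_dir")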

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Cannot reproduce the test accuracy.

    I tried to run the test of HRFormer on ImageNet-1K, but the result was strange: the top-1 accuracy is about 2.0%.

    Test command

    bash run_eval.sh hrt/hrt_tiny ~/Downloads/hrt_tiny_imagenet_pretrained_top1_786.pth  ~/data/imagenet
    

    Test output

    [2022-09-06 15:00:15 hrt_tiny](main.py 157): INFO number of params: 8035820
    All checkpoints founded in output/hrt_tiny/default: []
    [2022-09-06 15:00:15 hrt_tiny](main.py 184): INFO no checkpoint found in output/hrt_tiny/default, ignoring auto resume
    [2022-09-06 15:00:15 hrt_tiny](utils.py 21): INFO ==============> Resuming form /home/mzr/Downloads/hrt_tiny_imagenet_pretrained_top1_786.pth....................
    [2022-09-06 15:00:15 hrt_tiny](utils.py 31): INFO <All keys matched successfully>
    [2022-09-06 15:00:19 hrt_tiny](main.py 389): INFO Test: [0/391]	Time 4.122 (4.122)	Loss 8.9438 (8.9438)	Acc@1 2.344 (2.344)	Acc@5 4.688 (4.688)	Mem 2309MB
    [2022-09-06 15:00:29 hrt_tiny](main.py 389): INFO Test: [10/391]	Time 1.028 (1.279)	Loss 9.0749 (9.3455)	Acc@1 5.469 (2.486)	Acc@5 12.500 (7.031)	Mem 2309MB
    [2022-09-06 15:00:39 hrt_tiny](main.py 389): INFO Test: [20/391]	Time 1.027 (1.159)	Loss 9.9610 (9.3413)	Acc@1 0.781 (2.269)	Acc@5 4.688 (7.403)	Mem 2309MB
    [2022-09-06 15:00:49 hrt_tiny](main.py 389): INFO Test: [30/391]	Time 0.952 (1.103)	Loss 9.1598 (9.3309)	Acc@1 1.562 (2.293)	Acc@5 7.812 (7.359)	Mem 2309MB
    [2022-09-06 15:00:59 hrt_tiny](main.py 389): INFO Test: [40/391]	Time 0.951 (1.071)	Loss 9.3239 (9.3605)	Acc@1 0.781 (2.210)	Acc@5 4.688 (7.241)	Mem 2309MB
    [2022-09-06 15:01:09 hrt_tiny](main.py 389): INFO Test: [50/391]	Time 0.952 (1.049)	Loss 9.7051 (9.3650)	Acc@1 0.781 (2.191)	Acc@5 3.125 (7.200)	Mem 2309MB
    [2022-09-06 15:01:18 hrt_tiny](main.py 389): INFO Test: [60/391]	Time 0.951 (1.035)	Loss 9.5935 (9.3584)	Acc@1 1.562 (2.075)	Acc@5 7.812 (7.095)	Mem 2309MB
    ...
    

    The environment is brand new, set up according to the install instructions, and the checkpoint is from https://github.com/HRNet/HRFormer/releases/tag/v1.0.0 . The only change is that I disabled AMP.

    opened by mzr1996 0
  • cocostuff dataset validation bug

    In the segmentation folder, segmentation_val/segmentor/tester.py, line 183:

    def __relabel(self, label_map):
        height, width = label_map.shape
        label_dst = np.zeros((height, width), dtype=np.uint8)
        for i in range(self.configer.get('data', 'num_classes')):
            label_dst[label_map == i] = self.configer.get('data', 'label_list')[i]
      
        label_dst = np.array(label_dst, dtype=np.uint8)
      
        return label_dst
    
    if self.configer.exists('data', 'reduce_zero_label') and self.configer.get('data', 'reduce_zero_label'):
        label_img = label_img + 1
        label_img = label_img.astype(np.uint8)
    if self.configer.exists('data', 'label_list'):
        label_img_ = self.__relabel(label_img)
    else:
        label_img_ = label_img
    

    For the COCO-Stuff dataset (171 classes), the predicted classes originally range over 0-170; after the +1 shift they range over 1-171, and label_img is then fed into the __relabel() function. However, the loop in __relabel() only covers 0-170, so class 171 is never relabeled. (A possible fix is sketched below.)
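
    A minimal sketch of one possible fix for the shifted case (hypothetical: it assumes label_list[i - 1] holds the target id for shifted class i):

        def __relabel(self, label_map):
            height, width = label_map.shape
            label_dst = np.zeros((height, width), dtype=np.uint8)
            # iterate over the shifted range 1..num_classes so class 171 is covered too
            for i in range(1, self.configer.get('data', 'num_classes') + 1):
                label_dst[label_map == i] = self.configer.get('data', 'label_list')[i - 1]
            return label_dst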

    opened by chencheng1203 0
  • missing `mmpose/version.py`

    Hi,

    When I installed mmpose in this repo, I found there is no mmpose/version.py file.

        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/home/chenshoufa/workspace/HRFormer/pose/setup.py", line 105, in <module>
            version=get_version(),
          File "/home/chenshoufa/workspace/HRFormer/pose/setup.py", line 14, in get_version
            with open(version_file, 'r') as f:
        FileNotFoundError: [Errno 2] No such file or directory: 'mmpose/version.py'
    
    
    opened by ShoufaChen 2
  • Inference speed

    What is the inference speed for, e.g., semantic segmentation at 1024x1024 (referring to Table 5)? Measured on a GPU of your choice, just to get a feeling?

    opened by UrskaJ 0