Quasi-Dense Similarity Learning for Multiple Object Tracking, CVPR 2021 (Oral)

Related tags

Deep Learning, qdtrack
Overview

Quasi-Dense Tracking

This is the official implementation of the paper Quasi-Dense Similarity Learning for Multiple Object Tracking.

We present a trailer that consists of method illustrations and tracking visualizations. Take a look!

If you have any questions, please go to Discussions.

Abstract

Similarity learning has been recognized as a crucial step for object tracking. However, existing multiple object tracking methods only use sparse ground truth matching as the training objective, while ignoring the majority of the informative regions on the images. In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning. We can naturally combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack) without turning to displacement regression or motion priors. We also find that the resulting distinctive feature space admits a simple nearest neighbor search at the inference time. Despite its simplicity, QDTrack outperforms all existing methods on MOT, BDD100K, Waymo, and TAO tracking benchmarks. It achieves 68.7 MOTA at 20.3 FPS on MOT17 without using external training data. Compared to methods with similar detectors, it boosts almost 10 points of MOTA and significantly decreases the number of ID switches on BDD100K and Waymo datasets.
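To make the training objective described above concrete, here is a minimal sketch of a multi-positive contrastive loss over sampled proposal embeddings, in the spirit of the quasi-dense similarity learning described in the abstract. The function name, tensor shapes, and the exact loss form are illustrative assumptions, not the repository's implementation.

import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(key_embed, pos_embeds, neg_embeds):
    # key_embed:  (D,)   embedding of one proposal on the key frame.
    # pos_embeds: (P, D) reference-frame proposals with the same identity.
    # neg_embeds: (N, D) reference-frame proposals with other identities.
    pos_sim = pos_embeds @ key_embed          # (P,) similarities to positives
    neg_sim = neg_embeds @ key_embed          # (N,) similarities to negatives
    # Penalize every negative similarity that approaches any positive one:
    # log(1 + sum_{n,p} exp(neg_sim[n] - pos_sim[p])), computed stably.
    diff = neg_sim.unsqueeze(1) - pos_sim.unsqueeze(0)   # (N, P)
    return F.softplus(torch.logsumexp(diff.reshape(-1), dim=0))

In practice such a term would be averaged over all sampled key-frame proposals and combined with the detection losses.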

Quasi-dense matching
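At inference time, the abstract notes that association reduces to a nearest neighbor search in the learned embedding space. Below is a minimal sketch of such an association step; the greedy one-to-one matching and the match_threshold value are assumptions for illustration, not the repository's exact tracker logic.

import torch
import torch.nn.functional as F

def associate(track_embeds, det_embeds, match_threshold=0.5):
    # track_embeds: (T, D) embeddings of currently tracked objects.
    # det_embeds:   (N, D) embeddings of detections in the new frame.
    # Returns a list of (detection_index, track_index) matches; unmatched
    # detections would start new tracks.
    if track_embeds.numel() == 0 or det_embeds.numel() == 0:
        return []
    # Cosine similarities between all detections and all tracks, shape (N, T).
    sim = F.normalize(det_embeds, dim=1) @ F.normalize(track_embeds, dim=1).t()
    best_sim, best_track = sim.max(dim=1)
    matches, used = [], set()
    # Greedy one-to-one assignment, most confident detections first.
    for det_idx in best_sim.argsort(descending=True).tolist():
        t = int(best_track[det_idx])
        if float(best_sim[det_idx]) >= match_threshold and t not in used:
            matches.append((det_idx, t))
            used.add(t)
    return matches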

Main results

Without bells and whistles, our method outperforms the state of the art on the MOT, BDD100K, Waymo, and TAO benchmarks.

BDD100K test set

| mMOTA | mIDF1 | ID Sw. |
| ----- | ----- | ------ |
| 35.5  | 52.3  | 10790  |

MOT

| Dataset | MOTA | IDF1 | ID Sw. | MT  | ML  |
| ------- | ---- | ---- | ------ | --- | --- |
| MOT16   | 69.8 | 67.1 | 1097   | 316 | 150 |
| MOT17   | 68.7 | 66.3 | 3378   | 957 | 516 |

Waymo validation set

| Category   | MOTA | IDF1 | ID Sw. |
| ---------- | ---- | ---- | ------ |
| Vehicle    | 55.6 | 66.2 | 24309  |
| Pedestrian | 50.3 | 58.4 | 6347   |
| Cyclist    | 26.2 | 45.7 | 56     |
| All        | 44.0 | 56.8 | 30712  |

TAO

| Split | AP50 | AP75 | AP  |
| ----- | ---- | ---- | --- |
| val   | 16.1 | 5.0  | 7.0 |
| test  | 12.4 | 4.5  | 5.2 |

Installation

Please refer to INSTALL.md for installation instructions.

Usage

Please refer to GET_STARTED.md for dataset preparation and running instructions.

We release pretrained models on the BDD100K dataset for testing.

More implementations / models on the following benchmarks will be released later:

  • Waymo
  • MOT16 / MOT17 / MOT20
  • TAO

Citation

@InProceedings{qdtrack,
  title = {Quasi-Dense Similarity Learning for Multiple Object Tracking},
  author = {Pang, Jiangmiao and Qiu, Linlu and Li, Xia and Chen, Haofeng and Li, Qi and Darrell, Trevor and Yu, Fisher},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  month = {June},
  year = {2021}
}
Comments
  • TypeError : Resnet : __init__() got an unexpected keyword argument 'init_cfg'


    Greetings,

I am currently using qdtrack in a Python 3.8, CUDA 11 environment on an RTX 3090 GPU. I get this error during both training and testing. All required datasets have been downloaded and placed in the expected locations. Any help would be appreciated.

    opened by AmanGoyal99 12
  • Inconsistent Results on BDD100K Tracking Validation Set


    Hi there.

I ran the pre-trained BDD100K model on the tracking validation set, and the resulting MOTA and IDF1 scores are lower than what QDTrack claims: MOTA 54.5 and IDF1 66.7 vs. your reported MOTA 63.5 and IDF1 71.5.

    Kindly verify if this is the case for you or if there are any missing settings.

    I followed the instructions and ran this command: sh ./tools/dist_test.sh ./configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ./ckpts/mmdet/qdtrack_frcnn_r50_fpn_12e_bdd100k_13328aed.pth 2 --out exp.pkl --eval track

    opened by taheranjary 8
  • Evaluation results on TAO-val


    Hello,

When I train the model with your code for TAO (i.e., pretrain on LVIS and finetune on TAO-train), I get the following final results on TAO-val, which are lower than the scores reported in the original paper.

|            | mAP0.5 | mAP0.75 | mAP[0.5:0.95] |
| ---------- | ------ | ------- | ------------- |
| reproduced | 13.8   | 5.5     | 6.5           |
| original   | 16.1   | 5.0     | 7.0           |

    Are there any issues that I have to consider for getting the original score?

    Thanks,

    opened by shwoo93 8
  • Training loss/Acc diagram


    Thanks for the great work!

I am trying to retrain QDTrack on BDD100K; however, it is converging very slowly (at least for the first epochs). Would it be possible to share your training loss and accuracy curves?

    Thanks in advance!

    opened by LisaBernhardt 7
  • Unclear which links to pick from BDD website for dataset prep


    The Readme indicates Detection and Tracking sets, but the site shows 11 options, including: Images, MOT 2020 Labels, MOT 2020 Data, Detection 2020 Labels.

    Also, clicking MOT 2020 Data shows many different options. Should they all be downloaded?

    opened by diesendruck 7
  • about train


When I train the network (Epoch 1, iteration 200/171305), the log shows: lr: 7.992e-03, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 81.8194, loss_bbox: nan, loss_track: nan, loss_track_aux: nan, loss: nan. Why does this happen?

    opened by ningqing123 6
  • Is customization of backbone possible as mentioned in the mmdet library ?


Kindly let me know whether backbone customization, as described in the mmdet library, can be used with qdtrack as well.

Link: https://github.com/open-mmlab/mmdetection/blob/master/docs/tutorials/customize_models.md#add-a-new-backbone

    opened by AmanGoyal99 5
  • Your BDD100K instructions are unclear


    This is what you are saying:

    
    On the official download page, the required data and annotations are
    
    detection set images: Images
    detection set annotations: Detection 2020 Labels
    tracking set images: MOT 2020 Data
    tracking set annotations: MOT 2020 Labels
    

But there are no "Images" or "MOT 2020 Data" options on the official BDD website.

    opened by ghost 5
  • I'm confused about the meaning of the auxiliary loss


Hi, thanks for your great work. According to the paper, there is an auxiliary loss, but I do not really understand the intuition behind this loss. [Screenshot (9)]

    Can you give me some more explanation of this loss? Thanks.

    opened by hcv1027 4
  • RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!


    Thank you for your paper and this repo! I would like to test your pretrained model on the BDD100k dataset. Therefore I followed the instructions (https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md) - downloaded BDD100k, converted annotations as described and stored everything as your folder structure suggests.

    I used 'single-gpu testing' in the chapter 'Test a Model' and executed the following command in the terminal: python tools/test.py ${QDTrack}/configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ${QDTrack}/pretrained_models/qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth --out testrun_01.pkl --eval track --show-dir ${QDTrack}/data/results

${QDTrack} indicates the path to qdtrack on my machine.

I get the error shown in the attached screenshot.

Could you please help me solve this issue? Thanks a lot!

    opened by LisaBernhardt 4
  • about MOT17: loss_track degrades to zero after 50 iterations


Thanks for your great work! I'm now trying to run qdtrack on MOT17. The detection part trained well and reached a reasonable mAP score, but the loss of the quasi-dense embedding part dropped to zero within 100 iterations, and I obtained very low MOTA, MOTP, IDF1, etc., after training. Note that I modified nothing except the dataset-related code, which I have checked carefully and therefore believe is not the cause. Should I modify the settings of the quasi-dense embedding head to make it work? Do you have any suggestions? Thank you very much!

    opened by wswdx 4
  • detector freeze problem


    Hi.

I'm trying to freeze the parameters of the detector as you suggested (https://github.com/SysCV/qdtrack/issues/126).

In qdtrack/models/mot/qdtrack.py, I tried to freeze the detector by setting freeze_detector=True. But with freeze_detector=True on self.detector, I got this error.

Traceback (most recent call last):
  File "tools/train.py", line 169, in <module>
    main()
  File "tools/train.py", line 140, in main
    test_cfg=cfg.get('test_cfg'))
  File "/workspace/qdtrack/qdtrack/models/builder.py", line 15, in build_model
    return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
  File "/workspace/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/workspace/mmcv/mmcv/utils/registry.py", line 72, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: QDTrack: 'QDTrack' object has no attribute 'backbone'


Here is the config file I used (attached as a screenshot).

I think the error is caused by self.detector.

How can I put the backbone, neck, rpn_head, and roi_head.bbox_head from the detector config file (/configs/base/faster_rcnn_r50_fpn.py) into self.detector?

    Thank you.

    opened by YOOHYOJEONG 1
  • Can I train only tracker?


    Hi.

I trained the detector using mmcv, and I want to use that checkpoint for the qdtrack detector without any additional detector training. In this case, can I train only the tracker of qdtrack?

If I set the mmcv-trained checkpoint in init_cfg=dict(checkpoint='') for the detector, is that the same as training only the tracker as I described?

    Thank you.

    opened by YOOHYOJEONG 2
  • The model and loaded state dict do not match exactly


    Hi,

    Thanks for open-sourcing the code of your great work! Looks like there are some bugs when running the current tools/inference.py.

When using configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py as the config and qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth as the checkpoint (from the Google Drive link provided in the README file), the model and the loaded state dict do not match exactly. It looks like you updated the layer names but did not update the released pretrained weights accordingly. Manually changing the layer names in the .pth file works; a remapping sketch is included after this comments list.

    The model and loaded state dict do not match exactly
    
    unexpected key in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.bn1.num_batches_tracked, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.bn1.num_batches_tracked, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.bn2.num_batches_tracked, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.bn3.num_batches_tracked, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.0.downsample.1.num_batches_tracked, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.bn1.num_batches_tracked, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.bn2.num_batches_tracked, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.1.bn3.num_batches_tracked, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.bn1.num_batches_tracked, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.bn2.num_batches_tracked, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer1.2.bn3.num_batches_tracked, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.bn1.num_batches_tracked, backbone.layer2.0.conv2.weight, backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.bn2.num_batches_tracked, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.bn3.num_batches_tracked, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.0.downsample.1.num_batches_tracked, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.bn1.num_batches_tracked, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, 
backbone.layer2.1.bn2.num_batches_tracked, backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.1.bn3.num_batches_tracked, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.bn1.num_batches_tracked, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.bn2.num_batches_tracked, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.2.bn3.num_batches_tracked, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.bn1.num_batches_tracked, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.bn2.num_batches_tracked, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer2.3.bn3.num_batches_tracked, backbone.layer3.0.conv1.weight, backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.bn1.num_batches_tracked, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.bn2.num_batches_tracked, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.bn3.num_batches_tracked, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.0.downsample.1.num_batches_tracked, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.bn1.num_batches_tracked, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.bn2.num_batches_tracked, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.1.bn3.num_batches_tracked, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.bn1.num_batches_tracked, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.bn2.num_batches_tracked, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, 
backbone.layer3.2.bn3.num_batches_tracked, backbone.layer3.3.conv1.weight, backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.bn1.num_batches_tracked, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.bn2.num_batches_tracked, backbone.layer3.3.conv3.weight, backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.3.bn3.num_batches_tracked, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.bn1.num_batches_tracked, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.bn2.num_batches_tracked, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.4.bn3.num_batches_tracked, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.bn1.num_batches_tracked, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.bn2.num_batches_tracked, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer3.5.bn3.num_batches_tracked, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.bn1.num_batches_tracked, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.bn2.num_batches_tracked, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.bn3.num_batches_tracked, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.0.downsample.1.num_batches_tracked, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.bn1.num_batches_tracked, backbone.layer4.1.conv2.weight, backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.bn2.num_batches_tracked, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.1.bn3.num_batches_tracked, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, 
backbone.layer4.2.bn1.num_batches_tracked, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.bn2.num_batches_tracked, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, backbone.layer4.2.bn3.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.lateral_convs.3.conv.weight, neck.lateral_convs.3.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, neck.fpn_convs.3.conv.weight, neck.fpn_convs.3.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, roi_head.bbox_head.fc_cls.weight, roi_head.bbox_head.fc_cls.bias, roi_head.bbox_head.fc_reg.weight, roi_head.bbox_head.fc_reg.bias, roi_head.bbox_head.shared_fcs.0.weight, roi_head.bbox_head.shared_fcs.0.bias, roi_head.bbox_head.shared_fcs.1.weight, roi_head.bbox_head.shared_fcs.1.bias, roi_head.track_head.convs.0.conv.weight, roi_head.track_head.convs.0.gn.weight, roi_head.track_head.convs.0.gn.bias, roi_head.track_head.convs.1.conv.weight, roi_head.track_head.convs.1.gn.weight, roi_head.track_head.convs.1.gn.bias, roi_head.track_head.convs.2.conv.weight, roi_head.track_head.convs.2.gn.weight, roi_head.track_head.convs.2.gn.bias, roi_head.track_head.convs.3.conv.weight, roi_head.track_head.convs.3.gn.weight, roi_head.track_head.convs.3.gn.bias, roi_head.track_head.fcs.0.weight, roi_head.track_head.fcs.0.bias, roi_head.track_head.fc_embed.weight, roi_head.track_head.fc_embed.bias
    
    missing keys in source state_dict: detector.backbone.conv1.weight, detector.backbone.bn1.weight, detector.backbone.bn1.bias, detector.backbone.bn1.running_mean, detector.backbone.bn1.running_var, detector.backbone.layer1.0.conv1.weight, detector.backbone.layer1.0.bn1.weight, detector.backbone.layer1.0.bn1.bias, detector.backbone.layer1.0.bn1.running_mean, detector.backbone.layer1.0.bn1.running_var, detector.backbone.layer1.0.conv2.weight, detector.backbone.layer1.0.bn2.weight, detector.backbone.layer1.0.bn2.bias, detector.backbone.layer1.0.bn2.running_mean, detector.backbone.layer1.0.bn2.running_var, detector.backbone.layer1.0.conv3.weight, detector.backbone.layer1.0.bn3.weight, detector.backbone.layer1.0.bn3.bias, detector.backbone.layer1.0.bn3.running_mean, detector.backbone.layer1.0.bn3.running_var, detector.backbone.layer1.0.downsample.0.weight, detector.backbone.layer1.0.downsample.1.weight, detector.backbone.layer1.0.downsample.1.bias, detector.backbone.layer1.0.downsample.1.running_mean, detector.backbone.layer1.0.downsample.1.running_var, detector.backbone.layer1.1.conv1.weight, detector.backbone.layer1.1.bn1.weight, detector.backbone.layer1.1.bn1.bias, detector.backbone.layer1.1.bn1.running_mean, detector.backbone.layer1.1.bn1.running_var, detector.backbone.layer1.1.conv2.weight, detector.backbone.layer1.1.bn2.weight, detector.backbone.layer1.1.bn2.bias, detector.backbone.layer1.1.bn2.running_mean, detector.backbone.layer1.1.bn2.running_var, detector.backbone.layer1.1.conv3.weight, detector.backbone.layer1.1.bn3.weight, detector.backbone.layer1.1.bn3.bias, detector.backbone.layer1.1.bn3.running_mean, detector.backbone.layer1.1.bn3.running_var, detector.backbone.layer1.2.conv1.weight, detector.backbone.layer1.2.bn1.weight, detector.backbone.layer1.2.bn1.bias, detector.backbone.layer1.2.bn1.running_mean, detector.backbone.layer1.2.bn1.running_var, detector.backbone.layer1.2.conv2.weight, detector.backbone.layer1.2.bn2.weight, detector.backbone.layer1.2.bn2.bias, detector.backbone.layer1.2.bn2.running_mean, detector.backbone.layer1.2.bn2.running_var, detector.backbone.layer1.2.conv3.weight, detector.backbone.layer1.2.bn3.weight, detector.backbone.layer1.2.bn3.bias, detector.backbone.layer1.2.bn3.running_mean, detector.backbone.layer1.2.bn3.running_var, detector.backbone.layer2.0.conv1.weight, detector.backbone.layer2.0.bn1.weight, detector.backbone.layer2.0.bn1.bias, detector.backbone.layer2.0.bn1.running_mean, detector.backbone.layer2.0.bn1.running_var, detector.backbone.layer2.0.conv2.weight, detector.backbone.layer2.0.bn2.weight, detector.backbone.layer2.0.bn2.bias, detector.backbone.layer2.0.bn2.running_mean, detector.backbone.layer2.0.bn2.running_var, detector.backbone.layer2.0.conv3.weight, detector.backbone.layer2.0.bn3.weight, detector.backbone.layer2.0.bn3.bias, detector.backbone.layer2.0.bn3.running_mean, detector.backbone.layer2.0.bn3.running_var, detector.backbone.layer2.0.downsample.0.weight, detector.backbone.layer2.0.downsample.1.weight, detector.backbone.layer2.0.downsample.1.bias, detector.backbone.layer2.0.downsample.1.running_mean, detector.backbone.layer2.0.downsample.1.running_var, detector.backbone.layer2.1.conv1.weight, detector.backbone.layer2.1.bn1.weight, detector.backbone.layer2.1.bn1.bias, detector.backbone.layer2.1.bn1.running_mean, detector.backbone.layer2.1.bn1.running_var, detector.backbone.layer2.1.conv2.weight, detector.backbone.layer2.1.bn2.weight, detector.backbone.layer2.1.bn2.bias, detector.backbone.layer2.1.bn2.running_mean, 
detector.backbone.layer2.1.bn2.running_var, detector.backbone.layer2.1.conv3.weight, detector.backbone.layer2.1.bn3.weight, detector.backbone.layer2.1.bn3.bias, detector.backbone.layer2.1.bn3.running_mean, detector.backbone.layer2.1.bn3.running_var, detector.backbone.layer2.2.conv1.weight, detector.backbone.layer2.2.bn1.weight, detector.backbone.layer2.2.bn1.bias, detector.backbone.layer2.2.bn1.running_mean, detector.backbone.layer2.2.bn1.running_var, detector.backbone.layer2.2.conv2.weight, detector.backbone.layer2.2.bn2.weight, detector.backbone.layer2.2.bn2.bias, detector.backbone.layer2.2.bn2.running_mean, detector.backbone.layer2.2.bn2.running_var, detector.backbone.layer2.2.conv3.weight, detector.backbone.layer2.2.bn3.weight, detector.backbone.layer2.2.bn3.bias, detector.backbone.layer2.2.bn3.running_mean, detector.backbone.layer2.2.bn3.running_var, detector.backbone.layer2.3.conv1.weight, detector.backbone.layer2.3.bn1.weight, detector.backbone.layer2.3.bn1.bias, detector.backbone.layer2.3.bn1.running_mean, detector.backbone.layer2.3.bn1.running_var, detector.backbone.layer2.3.conv2.weight, detector.backbone.layer2.3.bn2.weight, detector.backbone.layer2.3.bn2.bias, detector.backbone.layer2.3.bn2.running_mean, detector.backbone.layer2.3.bn2.running_var, detector.backbone.layer2.3.conv3.weight, detector.backbone.layer2.3.bn3.weight, detector.backbone.layer2.3.bn3.bias, detector.backbone.layer2.3.bn3.running_mean, detector.backbone.layer2.3.bn3.running_var, detector.backbone.layer3.0.conv1.weight, detector.backbone.layer3.0.bn1.weight, detector.backbone.layer3.0.bn1.bias, detector.backbone.layer3.0.bn1.running_mean, detector.backbone.layer3.0.bn1.running_var, detector.backbone.layer3.0.conv2.weight, detector.backbone.layer3.0.bn2.weight, detector.backbone.layer3.0.bn2.bias, detector.backbone.layer3.0.bn2.running_mean, detector.backbone.layer3.0.bn2.running_var, detector.backbone.layer3.0.conv3.weight, detector.backbone.layer3.0.bn3.weight, detector.backbone.layer3.0.bn3.bias, detector.backbone.layer3.0.bn3.running_mean, detector.backbone.layer3.0.bn3.running_var, detector.backbone.layer3.0.downsample.0.weight, detector.backbone.layer3.0.downsample.1.weight, detector.backbone.layer3.0.downsample.1.bias, detector.backbone.layer3.0.downsample.1.running_mean, detector.backbone.layer3.0.downsample.1.running_var, detector.backbone.layer3.1.conv1.weight, detector.backbone.layer3.1.bn1.weight, detector.backbone.layer3.1.bn1.bias, detector.backbone.layer3.1.bn1.running_mean, detector.backbone.layer3.1.bn1.running_var, detector.backbone.layer3.1.conv2.weight, detector.backbone.layer3.1.bn2.weight, detector.backbone.layer3.1.bn2.bias, detector.backbone.layer3.1.bn2.running_mean, detector.backbone.layer3.1.bn2.running_var, detector.backbone.layer3.1.conv3.weight, detector.backbone.layer3.1.bn3.weight, detector.backbone.layer3.1.bn3.bias, detector.backbone.layer3.1.bn3.running_mean, detector.backbone.layer3.1.bn3.running_var, detector.backbone.layer3.2.conv1.weight, detector.backbone.layer3.2.bn1.weight, detector.backbone.layer3.2.bn1.bias, detector.backbone.layer3.2.bn1.running_mean, detector.backbone.layer3.2.bn1.running_var, detector.backbone.layer3.2.conv2.weight, detector.backbone.layer3.2.bn2.weight, detector.backbone.layer3.2.bn2.bias, detector.backbone.layer3.2.bn2.running_mean, detector.backbone.layer3.2.bn2.running_var, detector.backbone.layer3.2.conv3.weight, detector.backbone.layer3.2.bn3.weight, detector.backbone.layer3.2.bn3.bias, detector.backbone.layer3.2.bn3.running_mean, 
detector.backbone.layer3.2.bn3.running_var, detector.backbone.layer3.3.conv1.weight, detector.backbone.layer3.3.bn1.weight, detector.backbone.layer3.3.bn1.bias, detector.backbone.layer3.3.bn1.running_mean, detector.backbone.layer3.3.bn1.running_var, detector.backbone.layer3.3.conv2.weight, detector.backbone.layer3.3.bn2.weight, detector.backbone.layer3.3.bn2.bias, detector.backbone.layer3.3.bn2.running_mean, detector.backbone.layer3.3.bn2.running_var, detector.backbone.layer3.3.conv3.weight, detector.backbone.layer3.3.bn3.weight, detector.backbone.layer3.3.bn3.bias, detector.backbone.layer3.3.bn3.running_mean, detector.backbone.layer3.3.bn3.running_var, detector.backbone.layer3.4.conv1.weight, detector.backbone.layer3.4.bn1.weight, detector.backbone.layer3.4.bn1.bias, detector.backbone.layer3.4.bn1.running_mean, detector.backbone.layer3.4.bn1.running_var, detector.backbone.layer3.4.conv2.weight, detector.backbone.layer3.4.bn2.weight, detector.backbone.layer3.4.bn2.bias, detector.backbone.layer3.4.bn2.running_mean, detector.backbone.layer3.4.bn2.running_var, detector.backbone.layer3.4.conv3.weight, detector.backbone.layer3.4.bn3.weight, detector.backbone.layer3.4.bn3.bias, detector.backbone.layer3.4.bn3.running_mean, detector.backbone.layer3.4.bn3.running_var, detector.backbone.layer3.5.conv1.weight, detector.backbone.layer3.5.bn1.weight, detector.backbone.layer3.5.bn1.bias, detector.backbone.layer3.5.bn1.running_mean, detector.backbone.layer3.5.bn1.running_var, detector.backbone.layer3.5.conv2.weight, detector.backbone.layer3.5.bn2.weight, detector.backbone.layer3.5.bn2.bias, detector.backbone.layer3.5.bn2.running_mean, detector.backbone.layer3.5.bn2.running_var, detector.backbone.layer3.5.conv3.weight, detector.backbone.layer3.5.bn3.weight, detector.backbone.layer3.5.bn3.bias, detector.backbone.layer3.5.bn3.running_mean, detector.backbone.layer3.5.bn3.running_var, detector.backbone.layer4.0.conv1.weight, detector.backbone.layer4.0.bn1.weight, detector.backbone.layer4.0.bn1.bias, detector.backbone.layer4.0.bn1.running_mean, detector.backbone.layer4.0.bn1.running_var, detector.backbone.layer4.0.conv2.weight, detector.backbone.layer4.0.bn2.weight, detector.backbone.layer4.0.bn2.bias, detector.backbone.layer4.0.bn2.running_mean, detector.backbone.layer4.0.bn2.running_var, detector.backbone.layer4.0.conv3.weight, detector.backbone.layer4.0.bn3.weight, detector.backbone.layer4.0.bn3.bias, detector.backbone.layer4.0.bn3.running_mean, detector.backbone.layer4.0.bn3.running_var, detector.backbone.layer4.0.downsample.0.weight, detector.backbone.layer4.0.downsample.1.weight, detector.backbone.layer4.0.downsample.1.bias, detector.backbone.layer4.0.downsample.1.running_mean, detector.backbone.layer4.0.downsample.1.running_var, detector.backbone.layer4.1.conv1.weight, detector.backbone.layer4.1.bn1.weight, detector.backbone.layer4.1.bn1.bias, detector.backbone.layer4.1.bn1.running_mean, detector.backbone.layer4.1.bn1.running_var, detector.backbone.layer4.1.conv2.weight, detector.backbone.layer4.1.bn2.weight, detector.backbone.layer4.1.bn2.bias, detector.backbone.layer4.1.bn2.running_mean, detector.backbone.layer4.1.bn2.running_var, detector.backbone.layer4.1.conv3.weight, detector.backbone.layer4.1.bn3.weight, detector.backbone.layer4.1.bn3.bias, detector.backbone.layer4.1.bn3.running_mean, detector.backbone.layer4.1.bn3.running_var, detector.backbone.layer4.2.conv1.weight, detector.backbone.layer4.2.bn1.weight, detector.backbone.layer4.2.bn1.bias, detector.backbone.layer4.2.bn1.running_mean, 
detector.backbone.layer4.2.bn1.running_var, detector.backbone.layer4.2.conv2.weight, detector.backbone.layer4.2.bn2.weight, detector.backbone.layer4.2.bn2.bias, detector.backbone.layer4.2.bn2.running_mean, detector.backbone.layer4.2.bn2.running_var, detector.backbone.layer4.2.conv3.weight, detector.backbone.layer4.2.bn3.weight, detector.backbone.layer4.2.bn3.bias, detector.backbone.layer4.2.bn3.running_mean, detector.backbone.layer4.2.bn3.running_var, detector.neck.lateral_convs.0.conv.weight, detector.neck.lateral_convs.0.conv.bias, detector.neck.lateral_convs.1.conv.weight, detector.neck.lateral_convs.1.conv.bias, detector.neck.lateral_convs.2.conv.weight, detector.neck.lateral_convs.2.conv.bias, detector.neck.lateral_convs.3.conv.weight, detector.neck.lateral_convs.3.conv.bias, detector.neck.fpn_convs.0.conv.weight, detector.neck.fpn_convs.0.conv.bias, detector.neck.fpn_convs.1.conv.weight, detector.neck.fpn_convs.1.conv.bias, detector.neck.fpn_convs.2.conv.weight, detector.neck.fpn_convs.2.conv.bias, detector.neck.fpn_convs.3.conv.weight, detector.neck.fpn_convs.3.conv.bias, detector.rpn_head.rpn_conv.weight, detector.rpn_head.rpn_conv.bias, detector.rpn_head.rpn_cls.weight, detector.rpn_head.rpn_cls.bias, detector.rpn_head.rpn_reg.weight, detector.rpn_head.rpn_reg.bias, detector.roi_head.bbox_head.fc_cls.weight, detector.roi_head.bbox_head.fc_cls.bias, detector.roi_head.bbox_head.fc_reg.weight, detector.roi_head.bbox_head.fc_reg.bias, detector.roi_head.bbox_head.shared_fcs.0.weight, detector.roi_head.bbox_head.shared_fcs.0.bias, detector.roi_head.bbox_head.shared_fcs.1.weight, detector.roi_head.bbox_head.shared_fcs.1.bias, track_head.track_head.convs.0.conv.weight, track_head.track_head.convs.0.gn.weight, track_head.track_head.convs.0.gn.bias, track_head.track_head.convs.1.conv.weight, track_head.track_head.convs.1.gn.weight, track_head.track_head.convs.1.gn.bias, track_head.track_head.convs.2.conv.weight, track_head.track_head.convs.2.gn.weight, track_head.track_head.convs.2.gn.bias, track_head.track_head.convs.3.conv.weight, track_head.track_head.convs.3.gn.weight, track_head.track_head.convs.3.gn.bias, track_head.track_head.fcs.0.weight, track_head.track_head.fcs.0.bias, track_head.track_head.fc_embed.weight, track_head.track_head.fc_embed.bias
    
    opened by yimingzhou1 1
  • BDD100k det conversion error


    When I try to run this command: python -m bdd100k.label.to_coco -m det -i bdd100k/labels/det_20/det_train.json -o data/bdd/labels/det_20/det_train_cocofmt.json I receive the following error:

[2022-09-23 16:25:55,619 to_coco.py:301 main] Mode: det remove-ignore: False ignore-as-class: False
[2022-09-23 16:25:55,619 to_coco.py:307 main] Loading annotations...
[2022-09-23 16:26:02,429 to_coco.py:318 main] Converting annotations...
10%|████████ | 6879/69863 [00:00<00:08, 7435.14it/s]
Traceback (most recent call last):
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 337, in <module>
    main()
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 322, in main
    coco = bdd100k2coco_det(
  File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 145, in bdd100k2coco_det
    if frame["labels"]:
KeyError: 'labels'

    This error does not occur when running with ${SET_NAME} equal to val

    opened by IMcDougall 0
  • The reference image and key image are exactly the same


    In the article (QDTrack), the difference between the key image and the reference image is indicated by the image below.

    Screenshot 2022-09-05 113254

    However, when debugging the training code, I saw that the reference image metadata and key image metadata returned by the data loader are exactly the same.

    Screenshot 2022-09-05 103058

Do I need to change a parameter before starting training, or is this an error in the code? I would be glad if you could let me know.

    opened by Hcayirli 4
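Regarding the state-dict mismatch reported above: the unexpected/missing key names in that log suggest the released checkpoint's keys lack the detector. prefix and use roi_head.track_head. where the current model expects track_head.track_head.. Below is a hedged remapping sketch based only on those key names; the file names are placeholders and this is not an official conversion script.

import torch

# Hypothetical paths; substitute the actual checkpoint location.
ckpt = torch.load('qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)

remapped = {}
for key, value in state.items():
    if key.startswith('roi_head.track_head.'):
        # roi_head.track_head.* -> track_head.track_head.*
        new_key = key.replace('roi_head.track_head.', 'track_head.track_head.', 1)
    else:
        # backbone.*, neck.*, rpn_head.*, roi_head.bbox_head.* -> detector.*
        new_key = 'detector.' + key
    remapped[new_key] = value

if isinstance(ckpt, dict) and 'state_dict' in ckpt:
    ckpt['state_dict'] = remapped
else:
    ckpt = remapped
torch.save(ckpt, 'qdtrack_frcnn_r50_fpn_12e_bdd100k_remapped.pth')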
Releases (v0.1)
Owner
ETH VIS Research Group
Visual Intelligence and Systems Group at ETH Zürich