Quasi-Dense Similarity Learning for Multiple Object Tracking, CVPR 2021 (Oral)

Related tags: Deep Learning, qdtrack
Overview

Quasi-Dense Tracking

This is the official implementation of the paper Quasi-Dense Similarity Learning for Multiple Object Tracking.

We present a trailer that consists of method illustrations and tracking visualizations. Take a look!

If you have any questions, please go to Discussions.

Abstract

Similarity learning has been recognized as a crucial step for object tracking. However, existing multiple object tracking methods only use sparse ground-truth matching as the training objective, while ignoring the majority of the informative regions in the images. In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning. We can naturally combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack) without turning to displacement regression or motion priors. We also find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time. Despite its simplicity, QDTrack outperforms all existing methods on the MOT, BDD100K, Waymo, and TAO tracking benchmarks. It achieves 68.7 MOTA at 20.3 FPS on MOT17 without using external training data. Compared to methods with similar detectors, it boosts MOTA by almost 10 points and significantly decreases the number of ID switches on the BDD100K and Waymo datasets.
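
To make the objective above concrete, the following is a minimal, hypothetical sketch of such a multi-positive contrastive loss over a pair of frames. It is written from the abstract's description rather than taken from this repository, and all function and argument names are placeholders.

import torch

def quasi_dense_contrastive_loss(key_embeds, ref_embeds, match_matrix):
    # Illustrative sketch only, not the repository's implementation.
    #   key_embeds:   (N, C) embeddings of proposals sampled from the key frame
    #   ref_embeds:   (M, C) embeddings of proposals sampled from the reference frame
    #   match_matrix: (N, M) binary matrix, 1 where both proposals cover the
    #                 same ground-truth instance, 0 otherwise
    sims = torch.mm(key_embeds, ref_embeds.t())            # (N, M) similarities
    pos = match_matrix.float()
    neg = 1.0 - pos
    # log(1 + sum_{k+} sum_{k-} exp(s_neg - s_pos))
    #   = log(1 + (sum_{k-} exp(s_neg)) * (sum_{k+} exp(-s_pos)))
    exp_neg = (sims.exp() * neg).sum(dim=1)
    exp_pos = ((-sims).exp() * pos).sum(dim=1)
    loss = torch.log1p(exp_neg * exp_pos)
    valid = pos.sum(dim=1) > 0                             # keep keys with at least one positive
    return loss[valid].mean()

Because each key proposal can have several positive matches on the reference frame, the double sum over positive and negative pairs factorizes into the two exponential sums used above.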

Quasi-dense matching
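
The figure referenced here illustrates how proposals in a key frame are matched quasi-densely against proposals in a reference frame. At inference time, the paper describes associating detections with existing tracks through a bi-directional softmax over embedding similarities followed by a nearest-neighbor lookup; the sketch below illustrates that idea with placeholder names and an arbitrary threshold, not the repository's exact tracker logic.

import torch

def bisoftmax_scores(det_embeds, track_embeds):
    # Similarities are normalized over the tracks and over the detections,
    # then averaged (bi-directional softmax).
    sims = torch.mm(det_embeds, track_embeds.t())   # (num_dets, num_tracks)
    d2t = sims.softmax(dim=1)                        # each detection over all tracks
    t2d = sims.softmax(dim=0)                        # each track over all detections
    return (d2t + t2d) / 2

def associate(det_embeds, track_embeds, match_thr=0.5):
    # Nearest-neighbour lookup on the bi-softmax scores; match_thr is a
    # placeholder value, not the one used in the repository.
    scores = bisoftmax_scores(det_embeds, track_embeds)
    best_scores, track_ids = scores.max(dim=1)
    track_ids[best_scores < match_thr] = -1          # -1 means "start a new track"
    return track_ids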

Main results

Without bells and whistles, our method outperforms the state of the art on the MOT, BDD100K, Waymo, and TAO benchmarks.

BDD100K test set

| mMOTA | mIDF1 | ID Sw. |
|-------|-------|--------|
| 35.5  | 52.3  | 10790  |

MOT

| Dataset | MOTA | IDF1 | ID Sw. | MT  | ML  |
|---------|------|------|--------|-----|-----|
| MOT16   | 69.8 | 67.1 | 1097   | 316 | 150 |
| MOT17   | 68.7 | 66.3 | 3378   | 957 | 516 |

Waymo validation set

| Category   | MOTA | IDF1 | ID Sw. |
|------------|------|------|--------|
| Vehicle    | 55.6 | 66.2 | 24309  |
| Pedestrian | 50.3 | 58.4 | 6347   |
| Cyclist    | 26.2 | 45.7 | 56     |
| All        | 44.0 | 56.8 | 30712  |

TAO

| Split | AP50 | AP75 | AP  |
|-------|------|------|-----|
| val   | 16.1 | 5.0  | 7.0 |
| test  | 12.4 | 4.5  | 5.2 |

Installation

Please refer to INSTALL.md for installation instructions.

Usage

Please refer to GET_STARTED.md for dataset preparation and running instructions.

We release pretrained models trained on the BDD100K dataset for testing.
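
As a rough illustration of how the released checkpoint can be loaded for local experiments, here is a hypothetical Python sketch. The config and checkpoint names are the ones mentioned in the comments below, the build_model helper appears in the issue tracebacks there, and the documented tools/test.py workflow in GET_STARTED.md remains the reference.

from mmcv import Config
from mmcv.runner import load_checkpoint
from qdtrack.models.builder import build_model  # builder module referenced in the issue tracebacks below

# Adjust these paths to your own setup; names follow the BDD100K files
# mentioned in the comments section.
cfg = Config.fromfile('configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py')
model = build_model(cfg.model, train_cfg=None, test_cfg=cfg.get('test_cfg'))
load_checkpoint(model, 'ckpts/qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth', map_location='cpu')
model.eval()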

More implementations / models on the following benchmarks will be released later:

  • Waymo
  • MOT16 / MOT17 / MOT20
  • TAO

Citation

@InProceedings{qdtrack,
  title = {Quasi-Dense Similarity Learning for Multiple Object Tracking},
  author = {Pang, Jiangmiao and Qiu, Linlu and Li, Xia and Chen, Haofeng and Li, Qi and Darrell, Trevor and Yu, Fisher},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  month = {June},
  year = {2021}
}
Comments
  • TypeError : Resnet : __init__() got an unexpected keyword argument 'init_cfg'

    Greetings,

    I am currently using qdtrack in a Python 3.8, CUDA 11 environment on an RTX 3090 GPU. I get this error during both training and testing. All required datasets have been downloaded and placed in the required locations. Any help would be appreciated.

    opened by AmanGoyal99 12
  • Inconsistent Results on BDD100K Tracking Validation Set

    Hi there.

    I ran the pre-trained BDD100K model on the tracking validation set, and the resulting MOTA/IDF1 scores are lower than what QDTrack claims: my MOTA 54.5, IDF1 66.7 vs. your MOTA 63.5, IDF1 71.5.

    Kindly verify if this is the case for you or if there are any missing settings.

    I followed the instructions and ran this command: sh ./tools/dist_test.sh ./configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ./ckpts/mmdet/qdtrack_frcnn_r50_fpn_12e_bdd100k_13328aed.pth 2 --out exp.pkl --eval track

    opened by taheranjary 8
  • Evaluation results on TAO-val

    Hello,

    When I train the model with your code for TAO (i.e., pretrain on LVIS and finetune on TAO-train), I get the following final results on TAO-val, which are lower than the scores reported in the original paper.

    | mAP0.5 | mAP0.75 | mAP[0.5:0.95] |
    |--------|---------|---------------|
    | 13.8   | 5.5     | 6.5           |
    | 16.1   | 5.0     | 7.0           |

    (top row: reproduced, bottom row: original)

    Are there any issues that I have to consider for getting the original score?

    Thanks,

    opened by shwoo93 8
  • Training loss/Acc diagram

    Thanks for the great work!

    I am trying to retrain QDTrack on BDD100K; however, it converges very slowly (at least in the first epochs). I therefore wanted to ask whether you could share your training loss and accuracy diagrams.

    Thanks in advance!

    opened by LisaBernhardt 7
  • Unclear which links to pick from BDD website for dataset prep

    The Readme indicates Detection and Tracking sets, but the site shows 11 options, including: Images, MOT 2020 Labels, MOT 2020 Data, Detection 2020 Labels.

    Also, clicking MOT 2020 Data shows many different options. Should they all be downloaded?

    opened by diesendruck 7
  • about train

    When I train the network, at Epoch 1, iteration 200/171305, the log shows: lr: 7.992e-03, loss_rpn_cls: nan, loss_rpn_bbox: nan, loss_cls: nan, acc: 81.8194, loss_bbox: nan, loss_track: nan, loss_track_aux: nan, loss: nan. Why does this happen?

    opened by ningqing123 6
  • Is customization of backbone possible as mentioned in the mmdet library ?

    Kindly let me know whether backbone customization, as described in the mmdet documentation, can also be used with qdtrack.

    Link: https://github.com/open-mmlab/mmdetection/blob/master/docs/tutorials/customize_models.md#add-a-new-backbone

    opened by AmanGoyal99 5
  • Your BDD100K instructions are unclear

    This is what you are saying:

    
    On the official download page, the required data and annotations are
    
    detection set images: Images
    detection set annotations: Detection 2020 Labels
    tracking set images: MOT 2020 Data
    tracking set annotations: MOT 2020 Labels
    

    But there is no "Images" or "MOT 2020 Data" option on the official BDD website.

    opened by ghost 5
  • I'm confusing with the meaning of auxiliary loss

    Hi, thanks for your great work. According to the paper, there is an auxiliary loss, but I do not really understand the intuition behind it.

    Can you give me some more explanation of this loss? Thanks.
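
    For reference, one reading of the auxiliary loss in the paper is that it regresses the cosine similarity of each sampled pair towards 1 for positive pairs and 0 for negative pairs, constraining the embedding magnitudes on top of the main contrastive term. A minimal, hypothetical sketch (not the repository's code, names are placeholders):

    import torch
    import torch.nn.functional as F

    def auxiliary_cosine_loss(key_embeds, ref_embeds, match_matrix):
        # Push the cosine similarity of each pair towards 1 (same instance)
        # or 0 (different instances); illustrative only.
        cos = torch.mm(F.normalize(key_embeds, dim=1),
                       F.normalize(ref_embeds, dim=1).t())
        targets = match_matrix.float()
        return ((cos - targets) ** 2).mean()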

    opened by hcv1027 4
  • RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    Thank you for your paper and this repo! I would like to test your pretrained model on the BDD100k dataset. Therefore I followed the instructions (https://github.com/SysCV/qdtrack/blob/master/docs/GET_STARTED.md) - downloaded BDD100k, converted annotations as described and stored everything as your folder structure suggests.

    I used 'single-gpu testing' in the chapter 'Test a Model' and executed the following command in the terminal: python tools/test.py ${QDTrack}/configs/qdtrack-frcnn_r50_fpn_12e_bdd100k.py ${QDTrack}/pretrained_models/qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth --out testrun_01.pkl --eval track --show-dir ${QDTrack}/data/results

    ${QDTrack} indicates the path to qdtrack on my machine.

    I get the RuntimeError shown in the title (screenshot attached in the original issue).

    Could you please help me solve this issue? Thanks a lot!

    opened by LisaBernhardt 4
  • about MOT17: loss_track degrades to zero after 50 iterations

    Thanks for your great work! I'm now trying to run qdtrack on MOT17. The detection part trains well and reaches a reasonable mAP, but the loss of the quasi-dense embedding part quickly drops to zero within 100 iterations, and the model obtains very low MOTA, MOTP, IDF1, etc. after training. Note that I modified nothing except the dataset-related code, which I have checked carefully and believe is not the cause. Should I modify the settings of the quasi-dense embedding head to make it work? Do you have any suggestions? Thank you very much!

    opened by wswdx 4
  • detector freeze problem

    Hi.

    I'm going to freeze the parameters of the detector as you suggested (https://github.com/SysCV/qdtrack/issues/126).

    In qdtrack/models/mot/qdtrack.py, I tried to freeze the detector by setting freeze_detector=True. But with freeze_detector=True, I got this error:

    Traceback (most recent call last):
      File "tools/train.py", line 169, in <module>
        main()
      File "tools/train.py", line 140, in main
        test_cfg=cfg.get('test_cfg'))
      File "/workspace/qdtrack/qdtrack/models/builder.py", line 15, in build_model
        return build(cfg, MODELS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
      File "/workspace/mmcv/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
        return build_from_cfg(cfg, registry, default_args)
      File "/workspace/mmcv/mmcv/utils/registry.py", line 72, in build_from_cfg
        raise type(e)(f'{obj_cls.__name__}: {e}')
    AttributeError: QDTrack: 'QDTrack' object has no attribute 'backbone'

    Here is the config file I used (attached as a screenshot in the original issue).

    I think this is caused by self.detector.

    How can I put the backbone, neck, rpn_head, and roi_head.bbox_head of the detector config file (configs/_base_/faster_rcnn_r50_fpn.py) into self.detector?

    Thank you.

    opened by YOOHYOJEONG 1
  • Can I train only tracker?

    Hi.

    I trained the detector using mmcv, and I want to use that checkpoint for the detector of qdtrack without any additional detector training. In this case, can I train only the tracker of qdtrack?

    If I put the mmcv-trained checkpoint in init_cfg=dict(checkpoint='') for the detector, is that the same as training only the tracker, as described above?

    Thank you.

    opened by YOOHYOJEONG 2
  • The model and loaded state dict do not match exactly

    Hi,

    Thanks for open-sourcing the code of your great work! Looks like there are some bugs when running the current tools/inference.py.

    When using configs/bdd100k/qdtrack-frcnn_r50_fpn_12e_bdd100k.py as the config and qdtrack-frcnn_r50_fpn_12e_bdd100k-13328aed.pth as the checkpoint (from the Google Drive link provided in the README), the model and the loaded state dict do not match exactly. It looks like the layer names were updated without updating the pre-trained model accordingly. Manually changing the layer names in the .pth file works.

    The model and loaded state dict do not match exactly
    
    unexpected key in source state_dict: backbone.conv1.weight, backbone.bn1.weight, backbone.bn1.bias, backbone.bn1.running_mean, backbone.bn1.running_var, backbone.bn1.num_batches_tracked, backbone.layer1.0.conv1.weight, backbone.layer1.0.bn1.weight, backbone.layer1.0.bn1.bias, backbone.layer1.0.bn1.running_mean, backbone.layer1.0.bn1.running_var, backbone.layer1.0.bn1.num_batches_tracked, backbone.layer1.0.conv2.weight, backbone.layer1.0.bn2.weight, backbone.layer1.0.bn2.bias, backbone.layer1.0.bn2.running_mean, backbone.layer1.0.bn2.running_var, backbone.layer1.0.bn2.num_batches_tracked, backbone.layer1.0.conv3.weight, backbone.layer1.0.bn3.weight, backbone.layer1.0.bn3.bias, backbone.layer1.0.bn3.running_mean, backbone.layer1.0.bn3.running_var, backbone.layer1.0.bn3.num_batches_tracked, backbone.layer1.0.downsample.0.weight, backbone.layer1.0.downsample.1.weight, backbone.layer1.0.downsample.1.bias, backbone.layer1.0.downsample.1.running_mean, backbone.layer1.0.downsample.1.running_var, backbone.layer1.0.downsample.1.num_batches_tracked, backbone.layer1.1.conv1.weight, backbone.layer1.1.bn1.weight, backbone.layer1.1.bn1.bias, backbone.layer1.1.bn1.running_mean, backbone.layer1.1.bn1.running_var, backbone.layer1.1.bn1.num_batches_tracked, backbone.layer1.1.conv2.weight, backbone.layer1.1.bn2.weight, backbone.layer1.1.bn2.bias, backbone.layer1.1.bn2.running_mean, backbone.layer1.1.bn2.running_var, backbone.layer1.1.bn2.num_batches_tracked, backbone.layer1.1.conv3.weight, backbone.layer1.1.bn3.weight, backbone.layer1.1.bn3.bias, backbone.layer1.1.bn3.running_mean, backbone.layer1.1.bn3.running_var, backbone.layer1.1.bn3.num_batches_tracked, backbone.layer1.2.conv1.weight, backbone.layer1.2.bn1.weight, backbone.layer1.2.bn1.bias, backbone.layer1.2.bn1.running_mean, backbone.layer1.2.bn1.running_var, backbone.layer1.2.bn1.num_batches_tracked, backbone.layer1.2.conv2.weight, backbone.layer1.2.bn2.weight, backbone.layer1.2.bn2.bias, backbone.layer1.2.bn2.running_mean, backbone.layer1.2.bn2.running_var, backbone.layer1.2.bn2.num_batches_tracked, backbone.layer1.2.conv3.weight, backbone.layer1.2.bn3.weight, backbone.layer1.2.bn3.bias, backbone.layer1.2.bn3.running_mean, backbone.layer1.2.bn3.running_var, backbone.layer1.2.bn3.num_batches_tracked, backbone.layer2.0.conv1.weight, backbone.layer2.0.bn1.weight, backbone.layer2.0.bn1.bias, backbone.layer2.0.bn1.running_mean, backbone.layer2.0.bn1.running_var, backbone.layer2.0.bn1.num_batches_tracked, backbone.layer2.0.conv2.weight, backbone.layer2.0.bn2.weight, backbone.layer2.0.bn2.bias, backbone.layer2.0.bn2.running_mean, backbone.layer2.0.bn2.running_var, backbone.layer2.0.bn2.num_batches_tracked, backbone.layer2.0.conv3.weight, backbone.layer2.0.bn3.weight, backbone.layer2.0.bn3.bias, backbone.layer2.0.bn3.running_mean, backbone.layer2.0.bn3.running_var, backbone.layer2.0.bn3.num_batches_tracked, backbone.layer2.0.downsample.0.weight, backbone.layer2.0.downsample.1.weight, backbone.layer2.0.downsample.1.bias, backbone.layer2.0.downsample.1.running_mean, backbone.layer2.0.downsample.1.running_var, backbone.layer2.0.downsample.1.num_batches_tracked, backbone.layer2.1.conv1.weight, backbone.layer2.1.bn1.weight, backbone.layer2.1.bn1.bias, backbone.layer2.1.bn1.running_mean, backbone.layer2.1.bn1.running_var, backbone.layer2.1.bn1.num_batches_tracked, backbone.layer2.1.conv2.weight, backbone.layer2.1.bn2.weight, backbone.layer2.1.bn2.bias, backbone.layer2.1.bn2.running_mean, backbone.layer2.1.bn2.running_var, 
backbone.layer2.1.bn2.num_batches_tracked, backbone.layer2.1.conv3.weight, backbone.layer2.1.bn3.weight, backbone.layer2.1.bn3.bias, backbone.layer2.1.bn3.running_mean, backbone.layer2.1.bn3.running_var, backbone.layer2.1.bn3.num_batches_tracked, backbone.layer2.2.conv1.weight, backbone.layer2.2.bn1.weight, backbone.layer2.2.bn1.bias, backbone.layer2.2.bn1.running_mean, backbone.layer2.2.bn1.running_var, backbone.layer2.2.bn1.num_batches_tracked, backbone.layer2.2.conv2.weight, backbone.layer2.2.bn2.weight, backbone.layer2.2.bn2.bias, backbone.layer2.2.bn2.running_mean, backbone.layer2.2.bn2.running_var, backbone.layer2.2.bn2.num_batches_tracked, backbone.layer2.2.conv3.weight, backbone.layer2.2.bn3.weight, backbone.layer2.2.bn3.bias, backbone.layer2.2.bn3.running_mean, backbone.layer2.2.bn3.running_var, backbone.layer2.2.bn3.num_batches_tracked, backbone.layer2.3.conv1.weight, backbone.layer2.3.bn1.weight, backbone.layer2.3.bn1.bias, backbone.layer2.3.bn1.running_mean, backbone.layer2.3.bn1.running_var, backbone.layer2.3.bn1.num_batches_tracked, backbone.layer2.3.conv2.weight, backbone.layer2.3.bn2.weight, backbone.layer2.3.bn2.bias, backbone.layer2.3.bn2.running_mean, backbone.layer2.3.bn2.running_var, backbone.layer2.3.bn2.num_batches_tracked, backbone.layer2.3.conv3.weight, backbone.layer2.3.bn3.weight, backbone.layer2.3.bn3.bias, backbone.layer2.3.bn3.running_mean, backbone.layer2.3.bn3.running_var, backbone.layer2.3.bn3.num_batches_tracked, backbone.layer3.0.conv1.weight, backbone.layer3.0.bn1.weight, backbone.layer3.0.bn1.bias, backbone.layer3.0.bn1.running_mean, backbone.layer3.0.bn1.running_var, backbone.layer3.0.bn1.num_batches_tracked, backbone.layer3.0.conv2.weight, backbone.layer3.0.bn2.weight, backbone.layer3.0.bn2.bias, backbone.layer3.0.bn2.running_mean, backbone.layer3.0.bn2.running_var, backbone.layer3.0.bn2.num_batches_tracked, backbone.layer3.0.conv3.weight, backbone.layer3.0.bn3.weight, backbone.layer3.0.bn3.bias, backbone.layer3.0.bn3.running_mean, backbone.layer3.0.bn3.running_var, backbone.layer3.0.bn3.num_batches_tracked, backbone.layer3.0.downsample.0.weight, backbone.layer3.0.downsample.1.weight, backbone.layer3.0.downsample.1.bias, backbone.layer3.0.downsample.1.running_mean, backbone.layer3.0.downsample.1.running_var, backbone.layer3.0.downsample.1.num_batches_tracked, backbone.layer3.1.conv1.weight, backbone.layer3.1.bn1.weight, backbone.layer3.1.bn1.bias, backbone.layer3.1.bn1.running_mean, backbone.layer3.1.bn1.running_var, backbone.layer3.1.bn1.num_batches_tracked, backbone.layer3.1.conv2.weight, backbone.layer3.1.bn2.weight, backbone.layer3.1.bn2.bias, backbone.layer3.1.bn2.running_mean, backbone.layer3.1.bn2.running_var, backbone.layer3.1.bn2.num_batches_tracked, backbone.layer3.1.conv3.weight, backbone.layer3.1.bn3.weight, backbone.layer3.1.bn3.bias, backbone.layer3.1.bn3.running_mean, backbone.layer3.1.bn3.running_var, backbone.layer3.1.bn3.num_batches_tracked, backbone.layer3.2.conv1.weight, backbone.layer3.2.bn1.weight, backbone.layer3.2.bn1.bias, backbone.layer3.2.bn1.running_mean, backbone.layer3.2.bn1.running_var, backbone.layer3.2.bn1.num_batches_tracked, backbone.layer3.2.conv2.weight, backbone.layer3.2.bn2.weight, backbone.layer3.2.bn2.bias, backbone.layer3.2.bn2.running_mean, backbone.layer3.2.bn2.running_var, backbone.layer3.2.bn2.num_batches_tracked, backbone.layer3.2.conv3.weight, backbone.layer3.2.bn3.weight, backbone.layer3.2.bn3.bias, backbone.layer3.2.bn3.running_mean, backbone.layer3.2.bn3.running_var, 
backbone.layer3.2.bn3.num_batches_tracked, backbone.layer3.3.conv1.weight, backbone.layer3.3.bn1.weight, backbone.layer3.3.bn1.bias, backbone.layer3.3.bn1.running_mean, backbone.layer3.3.bn1.running_var, backbone.layer3.3.bn1.num_batches_tracked, backbone.layer3.3.conv2.weight, backbone.layer3.3.bn2.weight, backbone.layer3.3.bn2.bias, backbone.layer3.3.bn2.running_mean, backbone.layer3.3.bn2.running_var, backbone.layer3.3.bn2.num_batches_tracked, backbone.layer3.3.conv3.weight, backbone.layer3.3.bn3.weight, backbone.layer3.3.bn3.bias, backbone.layer3.3.bn3.running_mean, backbone.layer3.3.bn3.running_var, backbone.layer3.3.bn3.num_batches_tracked, backbone.layer3.4.conv1.weight, backbone.layer3.4.bn1.weight, backbone.layer3.4.bn1.bias, backbone.layer3.4.bn1.running_mean, backbone.layer3.4.bn1.running_var, backbone.layer3.4.bn1.num_batches_tracked, backbone.layer3.4.conv2.weight, backbone.layer3.4.bn2.weight, backbone.layer3.4.bn2.bias, backbone.layer3.4.bn2.running_mean, backbone.layer3.4.bn2.running_var, backbone.layer3.4.bn2.num_batches_tracked, backbone.layer3.4.conv3.weight, backbone.layer3.4.bn3.weight, backbone.layer3.4.bn3.bias, backbone.layer3.4.bn3.running_mean, backbone.layer3.4.bn3.running_var, backbone.layer3.4.bn3.num_batches_tracked, backbone.layer3.5.conv1.weight, backbone.layer3.5.bn1.weight, backbone.layer3.5.bn1.bias, backbone.layer3.5.bn1.running_mean, backbone.layer3.5.bn1.running_var, backbone.layer3.5.bn1.num_batches_tracked, backbone.layer3.5.conv2.weight, backbone.layer3.5.bn2.weight, backbone.layer3.5.bn2.bias, backbone.layer3.5.bn2.running_mean, backbone.layer3.5.bn2.running_var, backbone.layer3.5.bn2.num_batches_tracked, backbone.layer3.5.conv3.weight, backbone.layer3.5.bn3.weight, backbone.layer3.5.bn3.bias, backbone.layer3.5.bn3.running_mean, backbone.layer3.5.bn3.running_var, backbone.layer3.5.bn3.num_batches_tracked, backbone.layer4.0.conv1.weight, backbone.layer4.0.bn1.weight, backbone.layer4.0.bn1.bias, backbone.layer4.0.bn1.running_mean, backbone.layer4.0.bn1.running_var, backbone.layer4.0.bn1.num_batches_tracked, backbone.layer4.0.conv2.weight, backbone.layer4.0.bn2.weight, backbone.layer4.0.bn2.bias, backbone.layer4.0.bn2.running_mean, backbone.layer4.0.bn2.running_var, backbone.layer4.0.bn2.num_batches_tracked, backbone.layer4.0.conv3.weight, backbone.layer4.0.bn3.weight, backbone.layer4.0.bn3.bias, backbone.layer4.0.bn3.running_mean, backbone.layer4.0.bn3.running_var, backbone.layer4.0.bn3.num_batches_tracked, backbone.layer4.0.downsample.0.weight, backbone.layer4.0.downsample.1.weight, backbone.layer4.0.downsample.1.bias, backbone.layer4.0.downsample.1.running_mean, backbone.layer4.0.downsample.1.running_var, backbone.layer4.0.downsample.1.num_batches_tracked, backbone.layer4.1.conv1.weight, backbone.layer4.1.bn1.weight, backbone.layer4.1.bn1.bias, backbone.layer4.1.bn1.running_mean, backbone.layer4.1.bn1.running_var, backbone.layer4.1.bn1.num_batches_tracked, backbone.layer4.1.conv2.weight, backbone.layer4.1.bn2.weight, backbone.layer4.1.bn2.bias, backbone.layer4.1.bn2.running_mean, backbone.layer4.1.bn2.running_var, backbone.layer4.1.bn2.num_batches_tracked, backbone.layer4.1.conv3.weight, backbone.layer4.1.bn3.weight, backbone.layer4.1.bn3.bias, backbone.layer4.1.bn3.running_mean, backbone.layer4.1.bn3.running_var, backbone.layer4.1.bn3.num_batches_tracked, backbone.layer4.2.conv1.weight, backbone.layer4.2.bn1.weight, backbone.layer4.2.bn1.bias, backbone.layer4.2.bn1.running_mean, backbone.layer4.2.bn1.running_var, 
backbone.layer4.2.bn1.num_batches_tracked, backbone.layer4.2.conv2.weight, backbone.layer4.2.bn2.weight, backbone.layer4.2.bn2.bias, backbone.layer4.2.bn2.running_mean, backbone.layer4.2.bn2.running_var, backbone.layer4.2.bn2.num_batches_tracked, backbone.layer4.2.conv3.weight, backbone.layer4.2.bn3.weight, backbone.layer4.2.bn3.bias, backbone.layer4.2.bn3.running_mean, backbone.layer4.2.bn3.running_var, backbone.layer4.2.bn3.num_batches_tracked, neck.lateral_convs.0.conv.weight, neck.lateral_convs.0.conv.bias, neck.lateral_convs.1.conv.weight, neck.lateral_convs.1.conv.bias, neck.lateral_convs.2.conv.weight, neck.lateral_convs.2.conv.bias, neck.lateral_convs.3.conv.weight, neck.lateral_convs.3.conv.bias, neck.fpn_convs.0.conv.weight, neck.fpn_convs.0.conv.bias, neck.fpn_convs.1.conv.weight, neck.fpn_convs.1.conv.bias, neck.fpn_convs.2.conv.weight, neck.fpn_convs.2.conv.bias, neck.fpn_convs.3.conv.weight, neck.fpn_convs.3.conv.bias, rpn_head.rpn_conv.weight, rpn_head.rpn_conv.bias, rpn_head.rpn_cls.weight, rpn_head.rpn_cls.bias, rpn_head.rpn_reg.weight, rpn_head.rpn_reg.bias, roi_head.bbox_head.fc_cls.weight, roi_head.bbox_head.fc_cls.bias, roi_head.bbox_head.fc_reg.weight, roi_head.bbox_head.fc_reg.bias, roi_head.bbox_head.shared_fcs.0.weight, roi_head.bbox_head.shared_fcs.0.bias, roi_head.bbox_head.shared_fcs.1.weight, roi_head.bbox_head.shared_fcs.1.bias, roi_head.track_head.convs.0.conv.weight, roi_head.track_head.convs.0.gn.weight, roi_head.track_head.convs.0.gn.bias, roi_head.track_head.convs.1.conv.weight, roi_head.track_head.convs.1.gn.weight, roi_head.track_head.convs.1.gn.bias, roi_head.track_head.convs.2.conv.weight, roi_head.track_head.convs.2.gn.weight, roi_head.track_head.convs.2.gn.bias, roi_head.track_head.convs.3.conv.weight, roi_head.track_head.convs.3.gn.weight, roi_head.track_head.convs.3.gn.bias, roi_head.track_head.fcs.0.weight, roi_head.track_head.fcs.0.bias, roi_head.track_head.fc_embed.weight, roi_head.track_head.fc_embed.bias
    
    missing keys in source state_dict: detector.backbone.conv1.weight, detector.backbone.bn1.weight, detector.backbone.bn1.bias, detector.backbone.bn1.running_mean, detector.backbone.bn1.running_var, detector.backbone.layer1.0.conv1.weight, detector.backbone.layer1.0.bn1.weight, detector.backbone.layer1.0.bn1.bias, detector.backbone.layer1.0.bn1.running_mean, detector.backbone.layer1.0.bn1.running_var, detector.backbone.layer1.0.conv2.weight, detector.backbone.layer1.0.bn2.weight, detector.backbone.layer1.0.bn2.bias, detector.backbone.layer1.0.bn2.running_mean, detector.backbone.layer1.0.bn2.running_var, detector.backbone.layer1.0.conv3.weight, detector.backbone.layer1.0.bn3.weight, detector.backbone.layer1.0.bn3.bias, detector.backbone.layer1.0.bn3.running_mean, detector.backbone.layer1.0.bn3.running_var, detector.backbone.layer1.0.downsample.0.weight, detector.backbone.layer1.0.downsample.1.weight, detector.backbone.layer1.0.downsample.1.bias, detector.backbone.layer1.0.downsample.1.running_mean, detector.backbone.layer1.0.downsample.1.running_var, detector.backbone.layer1.1.conv1.weight, detector.backbone.layer1.1.bn1.weight, detector.backbone.layer1.1.bn1.bias, detector.backbone.layer1.1.bn1.running_mean, detector.backbone.layer1.1.bn1.running_var, detector.backbone.layer1.1.conv2.weight, detector.backbone.layer1.1.bn2.weight, detector.backbone.layer1.1.bn2.bias, detector.backbone.layer1.1.bn2.running_mean, detector.backbone.layer1.1.bn2.running_var, detector.backbone.layer1.1.conv3.weight, detector.backbone.layer1.1.bn3.weight, detector.backbone.layer1.1.bn3.bias, detector.backbone.layer1.1.bn3.running_mean, detector.backbone.layer1.1.bn3.running_var, detector.backbone.layer1.2.conv1.weight, detector.backbone.layer1.2.bn1.weight, detector.backbone.layer1.2.bn1.bias, detector.backbone.layer1.2.bn1.running_mean, detector.backbone.layer1.2.bn1.running_var, detector.backbone.layer1.2.conv2.weight, detector.backbone.layer1.2.bn2.weight, detector.backbone.layer1.2.bn2.bias, detector.backbone.layer1.2.bn2.running_mean, detector.backbone.layer1.2.bn2.running_var, detector.backbone.layer1.2.conv3.weight, detector.backbone.layer1.2.bn3.weight, detector.backbone.layer1.2.bn3.bias, detector.backbone.layer1.2.bn3.running_mean, detector.backbone.layer1.2.bn3.running_var, detector.backbone.layer2.0.conv1.weight, detector.backbone.layer2.0.bn1.weight, detector.backbone.layer2.0.bn1.bias, detector.backbone.layer2.0.bn1.running_mean, detector.backbone.layer2.0.bn1.running_var, detector.backbone.layer2.0.conv2.weight, detector.backbone.layer2.0.bn2.weight, detector.backbone.layer2.0.bn2.bias, detector.backbone.layer2.0.bn2.running_mean, detector.backbone.layer2.0.bn2.running_var, detector.backbone.layer2.0.conv3.weight, detector.backbone.layer2.0.bn3.weight, detector.backbone.layer2.0.bn3.bias, detector.backbone.layer2.0.bn3.running_mean, detector.backbone.layer2.0.bn3.running_var, detector.backbone.layer2.0.downsample.0.weight, detector.backbone.layer2.0.downsample.1.weight, detector.backbone.layer2.0.downsample.1.bias, detector.backbone.layer2.0.downsample.1.running_mean, detector.backbone.layer2.0.downsample.1.running_var, detector.backbone.layer2.1.conv1.weight, detector.backbone.layer2.1.bn1.weight, detector.backbone.layer2.1.bn1.bias, detector.backbone.layer2.1.bn1.running_mean, detector.backbone.layer2.1.bn1.running_var, detector.backbone.layer2.1.conv2.weight, detector.backbone.layer2.1.bn2.weight, detector.backbone.layer2.1.bn2.bias, detector.backbone.layer2.1.bn2.running_mean, 
detector.backbone.layer2.1.bn2.running_var, detector.backbone.layer2.1.conv3.weight, detector.backbone.layer2.1.bn3.weight, detector.backbone.layer2.1.bn3.bias, detector.backbone.layer2.1.bn3.running_mean, detector.backbone.layer2.1.bn3.running_var, detector.backbone.layer2.2.conv1.weight, detector.backbone.layer2.2.bn1.weight, detector.backbone.layer2.2.bn1.bias, detector.backbone.layer2.2.bn1.running_mean, detector.backbone.layer2.2.bn1.running_var, detector.backbone.layer2.2.conv2.weight, detector.backbone.layer2.2.bn2.weight, detector.backbone.layer2.2.bn2.bias, detector.backbone.layer2.2.bn2.running_mean, detector.backbone.layer2.2.bn2.running_var, detector.backbone.layer2.2.conv3.weight, detector.backbone.layer2.2.bn3.weight, detector.backbone.layer2.2.bn3.bias, detector.backbone.layer2.2.bn3.running_mean, detector.backbone.layer2.2.bn3.running_var, detector.backbone.layer2.3.conv1.weight, detector.backbone.layer2.3.bn1.weight, detector.backbone.layer2.3.bn1.bias, detector.backbone.layer2.3.bn1.running_mean, detector.backbone.layer2.3.bn1.running_var, detector.backbone.layer2.3.conv2.weight, detector.backbone.layer2.3.bn2.weight, detector.backbone.layer2.3.bn2.bias, detector.backbone.layer2.3.bn2.running_mean, detector.backbone.layer2.3.bn2.running_var, detector.backbone.layer2.3.conv3.weight, detector.backbone.layer2.3.bn3.weight, detector.backbone.layer2.3.bn3.bias, detector.backbone.layer2.3.bn3.running_mean, detector.backbone.layer2.3.bn3.running_var, detector.backbone.layer3.0.conv1.weight, detector.backbone.layer3.0.bn1.weight, detector.backbone.layer3.0.bn1.bias, detector.backbone.layer3.0.bn1.running_mean, detector.backbone.layer3.0.bn1.running_var, detector.backbone.layer3.0.conv2.weight, detector.backbone.layer3.0.bn2.weight, detector.backbone.layer3.0.bn2.bias, detector.backbone.layer3.0.bn2.running_mean, detector.backbone.layer3.0.bn2.running_var, detector.backbone.layer3.0.conv3.weight, detector.backbone.layer3.0.bn3.weight, detector.backbone.layer3.0.bn3.bias, detector.backbone.layer3.0.bn3.running_mean, detector.backbone.layer3.0.bn3.running_var, detector.backbone.layer3.0.downsample.0.weight, detector.backbone.layer3.0.downsample.1.weight, detector.backbone.layer3.0.downsample.1.bias, detector.backbone.layer3.0.downsample.1.running_mean, detector.backbone.layer3.0.downsample.1.running_var, detector.backbone.layer3.1.conv1.weight, detector.backbone.layer3.1.bn1.weight, detector.backbone.layer3.1.bn1.bias, detector.backbone.layer3.1.bn1.running_mean, detector.backbone.layer3.1.bn1.running_var, detector.backbone.layer3.1.conv2.weight, detector.backbone.layer3.1.bn2.weight, detector.backbone.layer3.1.bn2.bias, detector.backbone.layer3.1.bn2.running_mean, detector.backbone.layer3.1.bn2.running_var, detector.backbone.layer3.1.conv3.weight, detector.backbone.layer3.1.bn3.weight, detector.backbone.layer3.1.bn3.bias, detector.backbone.layer3.1.bn3.running_mean, detector.backbone.layer3.1.bn3.running_var, detector.backbone.layer3.2.conv1.weight, detector.backbone.layer3.2.bn1.weight, detector.backbone.layer3.2.bn1.bias, detector.backbone.layer3.2.bn1.running_mean, detector.backbone.layer3.2.bn1.running_var, detector.backbone.layer3.2.conv2.weight, detector.backbone.layer3.2.bn2.weight, detector.backbone.layer3.2.bn2.bias, detector.backbone.layer3.2.bn2.running_mean, detector.backbone.layer3.2.bn2.running_var, detector.backbone.layer3.2.conv3.weight, detector.backbone.layer3.2.bn3.weight, detector.backbone.layer3.2.bn3.bias, detector.backbone.layer3.2.bn3.running_mean, 
detector.backbone.layer3.2.bn3.running_var, detector.backbone.layer3.3.conv1.weight, detector.backbone.layer3.3.bn1.weight, detector.backbone.layer3.3.bn1.bias, detector.backbone.layer3.3.bn1.running_mean, detector.backbone.layer3.3.bn1.running_var, detector.backbone.layer3.3.conv2.weight, detector.backbone.layer3.3.bn2.weight, detector.backbone.layer3.3.bn2.bias, detector.backbone.layer3.3.bn2.running_mean, detector.backbone.layer3.3.bn2.running_var, detector.backbone.layer3.3.conv3.weight, detector.backbone.layer3.3.bn3.weight, detector.backbone.layer3.3.bn3.bias, detector.backbone.layer3.3.bn3.running_mean, detector.backbone.layer3.3.bn3.running_var, detector.backbone.layer3.4.conv1.weight, detector.backbone.layer3.4.bn1.weight, detector.backbone.layer3.4.bn1.bias, detector.backbone.layer3.4.bn1.running_mean, detector.backbone.layer3.4.bn1.running_var, detector.backbone.layer3.4.conv2.weight, detector.backbone.layer3.4.bn2.weight, detector.backbone.layer3.4.bn2.bias, detector.backbone.layer3.4.bn2.running_mean, detector.backbone.layer3.4.bn2.running_var, detector.backbone.layer3.4.conv3.weight, detector.backbone.layer3.4.bn3.weight, detector.backbone.layer3.4.bn3.bias, detector.backbone.layer3.4.bn3.running_mean, detector.backbone.layer3.4.bn3.running_var, detector.backbone.layer3.5.conv1.weight, detector.backbone.layer3.5.bn1.weight, detector.backbone.layer3.5.bn1.bias, detector.backbone.layer3.5.bn1.running_mean, detector.backbone.layer3.5.bn1.running_var, detector.backbone.layer3.5.conv2.weight, detector.backbone.layer3.5.bn2.weight, detector.backbone.layer3.5.bn2.bias, detector.backbone.layer3.5.bn2.running_mean, detector.backbone.layer3.5.bn2.running_var, detector.backbone.layer3.5.conv3.weight, detector.backbone.layer3.5.bn3.weight, detector.backbone.layer3.5.bn3.bias, detector.backbone.layer3.5.bn3.running_mean, detector.backbone.layer3.5.bn3.running_var, detector.backbone.layer4.0.conv1.weight, detector.backbone.layer4.0.bn1.weight, detector.backbone.layer4.0.bn1.bias, detector.backbone.layer4.0.bn1.running_mean, detector.backbone.layer4.0.bn1.running_var, detector.backbone.layer4.0.conv2.weight, detector.backbone.layer4.0.bn2.weight, detector.backbone.layer4.0.bn2.bias, detector.backbone.layer4.0.bn2.running_mean, detector.backbone.layer4.0.bn2.running_var, detector.backbone.layer4.0.conv3.weight, detector.backbone.layer4.0.bn3.weight, detector.backbone.layer4.0.bn3.bias, detector.backbone.layer4.0.bn3.running_mean, detector.backbone.layer4.0.bn3.running_var, detector.backbone.layer4.0.downsample.0.weight, detector.backbone.layer4.0.downsample.1.weight, detector.backbone.layer4.0.downsample.1.bias, detector.backbone.layer4.0.downsample.1.running_mean, detector.backbone.layer4.0.downsample.1.running_var, detector.backbone.layer4.1.conv1.weight, detector.backbone.layer4.1.bn1.weight, detector.backbone.layer4.1.bn1.bias, detector.backbone.layer4.1.bn1.running_mean, detector.backbone.layer4.1.bn1.running_var, detector.backbone.layer4.1.conv2.weight, detector.backbone.layer4.1.bn2.weight, detector.backbone.layer4.1.bn2.bias, detector.backbone.layer4.1.bn2.running_mean, detector.backbone.layer4.1.bn2.running_var, detector.backbone.layer4.1.conv3.weight, detector.backbone.layer4.1.bn3.weight, detector.backbone.layer4.1.bn3.bias, detector.backbone.layer4.1.bn3.running_mean, detector.backbone.layer4.1.bn3.running_var, detector.backbone.layer4.2.conv1.weight, detector.backbone.layer4.2.bn1.weight, detector.backbone.layer4.2.bn1.bias, detector.backbone.layer4.2.bn1.running_mean, 
detector.backbone.layer4.2.bn1.running_var, detector.backbone.layer4.2.conv2.weight, detector.backbone.layer4.2.bn2.weight, detector.backbone.layer4.2.bn2.bias, detector.backbone.layer4.2.bn2.running_mean, detector.backbone.layer4.2.bn2.running_var, detector.backbone.layer4.2.conv3.weight, detector.backbone.layer4.2.bn3.weight, detector.backbone.layer4.2.bn3.bias, detector.backbone.layer4.2.bn3.running_mean, detector.backbone.layer4.2.bn3.running_var, detector.neck.lateral_convs.0.conv.weight, detector.neck.lateral_convs.0.conv.bias, detector.neck.lateral_convs.1.conv.weight, detector.neck.lateral_convs.1.conv.bias, detector.neck.lateral_convs.2.conv.weight, detector.neck.lateral_convs.2.conv.bias, detector.neck.lateral_convs.3.conv.weight, detector.neck.lateral_convs.3.conv.bias, detector.neck.fpn_convs.0.conv.weight, detector.neck.fpn_convs.0.conv.bias, detector.neck.fpn_convs.1.conv.weight, detector.neck.fpn_convs.1.conv.bias, detector.neck.fpn_convs.2.conv.weight, detector.neck.fpn_convs.2.conv.bias, detector.neck.fpn_convs.3.conv.weight, detector.neck.fpn_convs.3.conv.bias, detector.rpn_head.rpn_conv.weight, detector.rpn_head.rpn_conv.bias, detector.rpn_head.rpn_cls.weight, detector.rpn_head.rpn_cls.bias, detector.rpn_head.rpn_reg.weight, detector.rpn_head.rpn_reg.bias, detector.roi_head.bbox_head.fc_cls.weight, detector.roi_head.bbox_head.fc_cls.bias, detector.roi_head.bbox_head.fc_reg.weight, detector.roi_head.bbox_head.fc_reg.bias, detector.roi_head.bbox_head.shared_fcs.0.weight, detector.roi_head.bbox_head.shared_fcs.0.bias, detector.roi_head.bbox_head.shared_fcs.1.weight, detector.roi_head.bbox_head.shared_fcs.1.bias, track_head.track_head.convs.0.conv.weight, track_head.track_head.convs.0.gn.weight, track_head.track_head.convs.0.gn.bias, track_head.track_head.convs.1.conv.weight, track_head.track_head.convs.1.gn.weight, track_head.track_head.convs.1.gn.bias, track_head.track_head.convs.2.conv.weight, track_head.track_head.convs.2.gn.weight, track_head.track_head.convs.2.gn.bias, track_head.track_head.convs.3.conv.weight, track_head.track_head.convs.3.gn.weight, track_head.track_head.convs.3.gn.bias, track_head.track_head.fcs.0.weight, track_head.track_head.fcs.0.bias, track_head.track_head.fc_embed.weight, track_head.track_head.fc_embed.bias
    
    opened by yimingzhou1 1
  • BDD100k det conversion error

    When I try to run this command: python -m bdd100k.label.to_coco -m det -i bdd100k/labels/det_20/det_train.json -o data/bdd/labels/det_20/det_train_cocofmt.json I receive the following error:

    [2022-09-23 16:25:55,619 to_coco.py:301 main] Mode: det remove-ignore: False ignore-as-class: False
    [2022-09-23 16:25:55,619 to_coco.py:307 main] Loading annotations...
    [2022-09-23 16:26:02,429 to_coco.py:318 main] Converting annotations...
    10%|████████ | 6879/69863 [00:00<00:08, 7435.14it/s]
    Traceback (most recent call last):
      File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)
      File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 337, in <module>
        main()
      File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 322, in main
        coco = bdd100k2coco_det(
      File "/u/m/c/mcdougall/miniconda3/envs/torch310/lib/python3.10/site-packages/bdd100k-1.0.0-py3.10.egg/bdd100k/label/to_coco.py", line 145, in bdd100k2coco_det
        if frame["labels"]:
    KeyError: 'labels'

    This error does not occur when running with ${SET_NAME} equal to val

    opened by IMcDougall 0
  • The reference image and key image are exactly the same

    In the paper (QDTrack), the difference between the key image and the reference image is illustrated in a figure (attached as a screenshot in the original issue).

    However, when debugging the training code, I saw that the reference image metadata and key image metadata returned by the data loader are exactly the same.

    Do I need to change a parameter before starting training, or is this a bug in the code? I would be glad if you could let me know.

    opened by Hcayirli 4
Releases: v0.1

Owner: ETH VIS Research Group (Visual Intelligence and Systems Group at ETH Zürich)