Overview

PyTracking

A general Python framework for visual object tracking and video object segmentation, based on PyTorch.

📣 Two tracking/VOS papers accepted at ICCV 2021! 👇

Highlights

KeepTrack, LWL, KYS, PrDiMP, DiMP and ATOM Trackers

Official implementation of the KeepTrack (ICCV 2021), LWL (ECCV 2020), KYS (ECCV 2020), PrDiMP (CVPR 2020), DiMP (ICCV 2019), and ATOM (CVPR 2019) trackers, including complete training code and trained models.

Tracking Libraries

Libraries for implementing and evaluating visual trackers (see the usage sketch after this list). It includes

  • All common tracking and video object segmentation datasets.
  • Scripts to analyse tracker performance and obtain standard performance scores.
  • General building blocks, including deep networks, optimization, feature extraction and utilities for correlation filter tracking.
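As an illustration, running a tracker over a benchmark and printing the standard scores can be done from Python roughly as follows. This is a hedged sketch: it assumes the pytracking.evaluation / pytracking.analysis module layout in the repository at the time of writing, and exact names may differ between versions.

# Sketch only; run from inside the pytracking/ directory.
from pytracking.evaluation import Tracker, get_dataset
from pytracking.evaluation.running import run_dataset
from pytracking.analysis.plot_results import print_results

trackers = [Tracker('dimp', 'dimp50')]   # tracker name and parameter file
dataset = get_dataset('otb')             # any supported benchmark name
run_dataset(dataset, trackers)           # writes the raw result files
print_results(trackers, dataset, 'otb')  # standard scores (e.g. AUC)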

Training Framework: LTR

LTR (Learning Tracking Representations) is a general framework for training your visual tracking networks (see the example after this list). It is equipped with

  • All common training datasets for visual object tracking and segmentation.
  • Functions for data sampling, processing etc.
  • Network modules for visual tracking.
  • And much more...
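For example, a training run is launched through the run_training.py script in ltr, passing the training module and the name of the settings file (here the DiMP settings shipped with the repository):

conda activate pytracking
cd ltr
python run_training.py dimp dimp50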

Model Zoo

The tracker models trained using PyTracking, along with their results on standard tracking benchmarks, are provided in the model zoo.

Trackers

The toolkit contains implementations of the following trackers.

KeepTrack (ICCV 2021)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of KeepTrack. KeepTrack actively handles distractor objects to continue tracking the target. It employs a learned target candidate association network that propagates the identities of all target candidates from frame to frame. To tackle the lack of ground-truth correspondences between distractor objects in visual tracking, it uses a training strategy that combines partial annotations with self-supervision.

[Figure: KeepTrack teaser]

LWL (ECCV 2020)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of the LWL tracker. LWL is an end-to-end trainable video object segmentation architecture which captures the current target object information in a compact parametric model. It integrates a differentiable few-shot learner module, which predicts the target model parameters using the first frame annotation. The learner is designed to explicitly optimize an error between target model prediction and a ground truth label. LWL further learns the ground-truth labels used by the few-shot learner to train the target model. All modules in the architecture are trained end-to-end by maximizing segmentation accuracy on annotated VOS videos.

[Figure: LWL overview]
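To make the idea concrete, below is a minimal sketch of a differentiable few-shot learner: a small convolutional target model is optimized against the first-frame label inside the forward pass. This is an illustration under simplified assumptions, not the learner used in the repository, which is considerably more elaborate.

import torch
import torch.nn.functional as F

def few_shot_learn_target_model(feat, label, num_iter=5, lr=0.1):
    # Optimize a small conv filter (the "target model") so that its
    # prediction on the first-frame features matches the label mask.
    # Plain gradient steps, kept differentiable (create_graph=True) so
    # the learner itself can sit inside an end-to-end trained network.
    filt = torch.zeros(1, feat.shape[1], 3, 3, requires_grad=True)
    for _ in range(num_iter):
        pred = F.conv2d(feat, filt, padding=1)
        loss = F.mse_loss(pred, label)
        g, = torch.autograd.grad(loss, filt, create_graph=True)
        filt = filt - lr * g
    return filt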

KYS (ECCV 2020)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of the KYS tracker. Unlike conventional frame-by-frame detection-based tracking, KYS propagates valuable scene information through the sequence. This information is used to achieve an improved scene-aware target prediction in each frame. The scene information is represented using a dense set of localized state vectors. These state vectors are propagated through the sequence and combined with the appearance model output to localize the target. The network is trained to effectively utilize the scene information by directly maximizing tracking performance on video segments.

[Figure: KYS overview]

PrDiMP (CVPR 2020)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of the PrDiMP tracker. This work proposes a general formulation for probabilistic regression, which is then applied to visual tracking in the DiMP framework. The network predicts the conditional probability density of the target state given an input image. The probability density is flexibly parametrized by the neural network itself. The regression network is trained by directly minimizing the Kullback-Leibler divergence between a label distribution and the predicted density.
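To illustrate the objective, the predicted density can be written as p(y|x) = exp(s(y, x)) / Z(x), and the loss as the KL divergence between a label density and p. Below is a simplified grid-based sketch of such a loss; the loss actually used in ltr evaluates the integrals differently (e.g. with sampled estimates), so treat this as an illustration only.

import math
import torch

def kl_density_loss(scores, label_density, cell_area):
    # KL(label || prediction) up to an additive constant, with the
    # partition function Z approximated on a regular grid of candidate
    # states; cell_area is the area of one grid cell.
    log_z = torch.logsumexp(scores, dim=-1) + math.log(cell_area)
    cross_ent = -(label_density * scores).sum(dim=-1) * cell_area
    return (log_z + cross_ent).mean()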

DiMP (ICCV 2019)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of the DiMP tracker. DiMP is an end-to-end tracking architecture, capable of fully exploiting both target and background appearance information for target model prediction. It is based on a target model prediction network, which is derived from a discriminative learning loss by applying an iterative optimization procedure. The model prediction network employs a steepest descent based methodology that computes an optimal step length in each iteration to provide fast convergence. The model predictor also includes an initializer network that efficiently provides an initial estimate of the model weights.

[Figure: DiMP overview]
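As a rough illustration of the steepest-descent idea, the step length can be computed in closed form from a local quadratic model. The sketch below is conceptual only; the optimizer modules in ltr implement a Gauss-Newton variant with learned components.

import torch

def steepest_descent(w, residual_fn, num_iter=5):
    # Each iteration takes a gradient step whose length
    # alpha = <g, g> / <g, H g> minimizes the local quadratic model.
    # The Hessian-vector product H g is obtained by double backprop.
    for _ in range(num_iter):
        w = w.detach().requires_grad_(True)
        loss = (residual_fn(w) ** 2).sum()
        g, = torch.autograd.grad(loss, w, create_graph=True)
        hg, = torch.autograd.grad((g * g.detach()).sum(), w)
        alpha = (g.detach() ** 2).sum() / ((g.detach() * hg).sum() + 1e-8)
        w = w.detach() - alpha * g.detach()
    return w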

ATOM (CVPR 2019)

[Paper] [Raw results] [Models] [Training Code] [Tracker Code]

Official implementation of the ATOM tracker. ATOM is based on (i) a target estimation module that is trained offline, and (ii) a target classification module that is trained online. The target estimation module is trained to predict the intersection-over-union (IoU) overlap between the target and a bounding box estimate. The target classification module is learned online using dedicated optimization techniques to discriminate between the target object and background.

[Figure: ATOM overview]
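At test time, the estimation module is used by maximizing the predicted IoU with respect to the box, roughly as in the sketch below. Here predict_iou and feat are placeholders for the trained IoU predictor and the backbone features; the actual refinement code in pytracking differs in detail.

import torch

def atom_style_refine(predict_iou, feat, box, steps=10, step_length=1.0):
    # Gradient ascent on the predicted IoU w.r.t. the box parameters.
    # box = (x, y, w, h); the gradient is scaled by the box size so the
    # step is roughly scale-invariant.
    box = box.clone().detach().requires_grad_(True)
    for _ in range(steps):
        iou = predict_iou(feat, box)   # assumed to return a scalar tensor
        iou.backward()
        with torch.no_grad():
            box += step_length * box.grad * box[2:].detach().repeat(2)
            box.grad.zero_()
    return box.detach()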

ECO/UPDT (CVPR 2017/ECCV 2018)

[Paper] [Models] [Tracker Code]

An unofficial implementation of the ECO tracker. It is built on an extensive and general library for complex operations and Fourier tools. The implementation differs from the version used in the original paper in a few important aspects.

  1. This implementation uses features from vgg-m layer 1 and resnet18 residual block 3.
  2. As in our later UPDT tracker, separate filters are trained for shallow and deep features, and extensive data augmentation is employed in the first frame.
  3. The GMM memory module is not implemented, instead the raw projected samples are stored.

Please refer to the official implementation of ECO if you are looking to reproduce the results in the ECO paper or download the raw results.

Installation

Clone the Git repository.

git clone https://github.com/visionml/pytracking.git

Clone the submodules.

In the repository directory, run the commands:

git submodule update --init  

Install dependencies

Run the installation script to install all the dependencies. You need to provide the conda install path (e.g. ~/anaconda3) and the name for the created conda environment (here pytracking).

bash install.sh conda_install_path pytracking

This script will also download the default networks and set up the environment.

Note: The install script has been tested on an Ubuntu 18.04 system. In case of issues, check the detailed installation instructions.

Windows: (NOT Recommended!) Check these installation instructions.

Let's test it!

Activate the conda environment and run the script pytracking/run_webcam.py to run DiMP using the webcam input.

conda activate pytracking
cd pytracking
python run_webcam.py dimp dimp50    

What's next?

pytracking - for implementing your tracker

ltr - for training your tracker

Contributors

Main Contributors

Guest Contributors

Acknowledgments

Comments
  • no checkpoint file

    Hello, I ran the command python run_webcam.py atom default, but it shows:

    Traceback (most recent call last):
      File "run_webcam.py", line 35, in <module>
        main()
      File "run_webcam.py", line 31, in main
        run_webcam(args.tracker_name, args.tracker_param, args.debug)
      File "run_webcam.py", line 20, in run_webcam
        tracker.run_webcam(debug)
      File "../pytracking/evaluation/tracker.py", line 94, in run_webcam
        tracker.track_webcam()
      File "../pytracking/tracker/base/basetracker.py", line 179, in track_webcam
        self.initialize_features()
      File "../pytracking/tracker/atom/atom.py", line 19, in initialize_features
        self.params.features.initialize()
      File "../pytracking/features/extractor.py", line 16, in initialize
        f.initialize()
      File "../pytracking/features/deep.py", line 95, in initialize
        self.net, _ = load_network(net_path_full, backbone_pretrained=False)
      File "../ltr/admin/loading.py", line 36, in load_network
        raise Exception('No matching checkpoint file found')
    Exception: No matching checkpoint file found

    I checked the model file in /networks; it is there. And local.py shows settings.network_path = '/home/bill/Documents/pytracking/pytracking/networks/'. However, the problem still happens.

    opened by universefall 22
  • Performance gap on OTB2015

    Thank you for your excellent work and for sharing it! I trained ATOM using the source code, with all configurations and training sets the same as in the source code. Then I used the last epoch to test on OTB2015. However, compared to the released model, whose AUC on OTB2015 is 0.678, I only get 0.657 AUC with the model I trained myself.

    opened by noUmbrella 20
  • Trouble with VOT integration

    Thanks for your great work. I have installed the vot-python-toolkit and set up the workspace, but when I type vot evaluate --workspace <workspace-path> DiMP, an error occurs. In the log file, it says

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    NameError: name 'run_vot' is not defined
    Process not alive anymore, unable to retrieve return code.
    

    And in the terminal it says Tracker DiMP encountered an error: Unable to connect to tracker. When I run run_vot.py, it freezes at handle = vot.VOT("polygon") in evaluation/tracker.py while the GPU is occupied. Can you tell me what's happening? Thanks a lot.

    opened by sherlockedlee 19
  • KeyError: 'Nf1aqv5Fg5o_0'

    Hi, thanks for your excellent work. When I train ATOM using the setting 'atom_paper.py', it uses the TrackingNet dataset and then raises the KeyError, but the dataset is correct. Can you tell me why? Looking forward to your reply! Best regards!

    opened by meimeixu520 14
  • The code for "Know Your Surroundings"

    Thank you for sharing your work. It is a good job! I wonder whether the code for "Know Your Surroundings: Exploiting Scene Information for Object Tracking" will also be released in this project. I notice the paper mentions that the code will be released upon publication, and it has now been accepted at CVPR 2020. Waiting for your reply, and thank you again. Best wishes!

    opened by xxAna 13
  • High CPU usage during inference

    Hi, during inference I notice that the program puts a heavy load on the CPU.

    Before running the program: [screenshot]

    After running the program with the thread count set to 1: [screenshot]

    With the thread count set to 30: [screenshot]

    It seems that there is no improvement.

    Is there any way to reduce CPU usage?

    opened by sfchen94 12
  • PrPooling compile error when evaluating on VOT2018 in MATLAB

    Hi: when I evaluated on VOT2018, the PrPooling compile error appeared. But the error did not arise on other datasets, e.g. OTB100. What could be causing this error? Environment: Ubuntu 18.04, MATLAB 2018a. Thanks!

    opened by maliangzhibi 11
  • running tracking after detection

    Hi, I would like to know if it is possible to use DiMP and ATOM with detections already provided by my trained detector. I have simply trained a Mask R-CNN on my object (grape bunches, so it is not in the common datasets). I then decompose a video of a vineyard row into frames and generate detections for each frame, saving them to a JSON file. Is it possible to use these already-generated detections with the two tracking methods without any further training? Sorry for the probably naive question, but I am new to this field and looking for a quick solution for my project.

    Another question: can you tell me which tracking methods in the repo work offline, since I do not require a real-time/online method?

    opened by andreaceruti 8
  • Error when running run_webcam.py

    File "..\ltr\external\PreciseRoIPooling\pytorch\prroi_pool\prroi_pool.py", line 28, in forward return prroi_pool2d(features, rois, self.pooled_height, self.pooled_width, self.spatial_scale) File "..\ltr\external\PreciseRoIPooling\pytorch\prroi_pool\functional.py", line 50, in forward _prroi_pooling = _import_prroi_pooling() File "..\ltr\external\PreciseRoIPooling\pytorch\prroi_pool\functional.py", line 28, in _import_prroi_pooling _prroi_pooling = imp.load_module('prroi_pool', file, path, description) File "D:\DevelopTools\Anaconda3\lib\imp.py", line 242, in load_module return load_dynamic(name, filename, file) File "D:\DevelopTools\Anaconda3\lib\imp.py", line 342, in load_dynamic return _load(spec) File "", line 696, in _load File "", line 670, in _load_unlocked File "", line 583, in module_from_spec File "", line 1043, in create_module File "", line 219, in _call_with_frames_removed ImportError: DLL load failed: Cannot find the specified module。

    opened by guangzou 8
  • The model file cannot be downloaded normally

    When I used install.sh, I found that the two default.pth model files could not be downloaded. Then I copied the link into the browser to download them, but the downloaded files were unusable. Could you please upload these two model files to GitHub? Thank you very much.

    opened by Alexadlu 8
  • PrDiMP training "RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR"

    Hi, I'm new to deep learning and PyTorch. I'm trying to run PrDiMP training with the resnet18 backbone. My machine runs CentOS 7, with CUDA 10.2, cuDNN 7.6.5, and GCC 7.3.1. I'm only training on the GOT-10k dataset, and ltr/admin/local.py and ltr/train_settings/dimp/prdimp18.py have been modified to fit my dataset.

    I ran the install.sh script to set up the environment, except that ninja-build was installed manually because CentOS does not use apt-get for installing packages.

    The environment works for the PrDiMP tracking task (testing on GOT-10k as well). However, during training, after 204 batches the program crashed with

    [train: 1, 200 / 2600] FPS: 17.0 (46.3)  ,  Loss/total: 7.24056  ,  Loss/bb_ce: 4.56396  ,  ClfTrain/clf_ce: 4.41025
    [train: 1, 201 / 2600] FPS: 17.1 (47.1)  ,  Loss/total: 7.23772  ,  Loss/bb_ce: 4.56450  ,  ClfTrain/clf_ce: 4.40706
    [train: 1, 202 / 2600] FPS: 16.9 (5.0)  ,  Loss/total: 7.23695  ,  Loss/bb_ce: 4.56430  ,  ClfTrain/clf_ce: 4.40547
    [train: 1, 203 / 2600] FPS: 16.8 (7.6)  ,  Loss/total: 7.23343  ,  Loss/bb_ce: 4.56405  ,  ClfTrain/clf_ce: 4.40103
    [train: 1, 204 / 2600] FPS: 16.8 (47.0)  ,  Loss/total: 7.23036  ,  Loss/bb_ce: 4.56352  ,  ClfTrain/clf_ce: 4.39753
    Training crashed at epoch 1
    Traceback for the error!
    Traceback (most recent call last):
      File "../ltr/trainers/base_trainer.py", line 70, in train
        self.train_epoch()
      File "../ltr/trainers/ltr_trainer.py", line 80, in train_epoch
        self.cycle_dataset(loader)
      File "../ltr/trainers/ltr_trainer.py", line 61, in cycle_dataset
        loss, stats = self.actor(data)
      File "../ltr/actors/tracking.py", line 95, in __call__
        test_proposals=data['test_proposals'])
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "../ltr/models/tracking/dimpnet.py", line 66, in forward
        iou_pred = self.bb_regressor(train_feat_iou, test_feat_iou, train_bb, test_proposals)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "../ltr/models/bbreg/atom_iou_net.py", line 86, in forward
        modulation = self.get_modulation(feat1, bb1)
      File "../ltr/models/bbreg/atom_iou_net.py", line 162, in get_modulation
        fc3_r = self.fc3_1r(roi3r)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
        input = module(input)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
        return self.conv2d_forward(input, self.weight)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
    

    and

    Restarting training from last epoch ...
    No matching checkpoint file found
    Training crashed at epoch 1
    Traceback for the error!
    Traceback (most recent call last):
      File "../ltr/trainers/base_trainer.py", line 70, in train
        self.train_epoch()
      File "../ltr/trainers/ltr_trainer.py", line 80, in train_epoch
        self.cycle_dataset(loader)
      File "../ltr/trainers/ltr_trainer.py", line 55, in cycle_dataset
        data = data.to(self.device)
      File "../pytracking/libs/tensordict.py", line 24, in apply_attr
        return TensorDict({n: getattr(e, name)(*args, **kwargs) if hasattr(e, name) else e for n, e in self.items()})
      File "../pytracking/libs/tensordict.py", line 24, in <dictcomp>
        return TensorDict({n: getattr(e, name)(*args, **kwargs) if hasattr(e, name) else e for n, e in self.items()})
    RuntimeError: CUDA error: an illegal memory access was encountered
    

    Plus, when the program started, there was a warning about the C++ compiler.

    No matching checkpoint file found
    Using /tmp/torch_extensions as PyTorch extensions root...
    /home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/utils/cpp_extension.py:191: UserWarning:
    
                                   !! WARNING !!
    
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Your compiler (c++) is not compatible with the compiler Pytorch was
    built with for this platform, which is g++ on linux. Please
    use g++ to to compile your extension. Alternatively, you may
    compile PyTorch from source using c++, and then you can also use
    c++ to compile your extension.
    
    See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help
    with compiling PyTorch from source.
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    
                                  !! WARNING !!
    
      platform=sys.platform))
    Detected CUDA files, patching ldflags
    Emitting ninja build file /tmp/torch_extensions/_prroi_pooling/build.ninja...
    Building extension module _prroi_pooling...
    ninja: no work to do.
    Loading extension module _prroi_pooling...
    [train: 1, 1 / 2600] FPS: 0.6 (0.6)  ,  Loss/total: 6.41806  ,  Loss/bb_ce: 4.67661  ,  ClfTrain/clf_ce: 3.84514
    [train: 1, 2 / 2600] FPS: 1.1 (48.4)  ,  Loss/total: 6.44306  ,  Loss/bb_ce: 4.55203  ,  ClfTrain/clf_ce: 3.85286
    [train: 1, 3 / 2600] FPS: 1.6 (46.3)  ,  Loss/total: 6.29484  ,  Loss/bb_ce: 4.54702  ,  ClfTrain/clf_ce: 3.72006
    

    However, when I run tracking, the same C++ warning appears, but it works anyway.

    Since I followed the install.sh script, the environment that was automatically set up is (output of conda list -n pytracking):

    # Name                    Version                   Build  Channel
    _libgcc_mutex             0.1                        main
    absl-py                   0.11.0                   pypi_0    pypi
    blas                      1.0                         mkl
    ca-certificates           2020.10.14                    0
    cachetools                4.1.1                    pypi_0    pypi
    certifi                   2020.11.8        py37h06a4308_0
    cffi                      1.14.4                   pypi_0    pypi
    chardet                   3.0.4                    pypi_0    pypi
    cudatoolkit               10.0.130                      0
    cycler                    0.10.0                   py37_0
    cython                    0.29.21          py37h2531618_0
    dbus                      1.13.18              hb2f20db_0
    decorator                 4.4.2                    pypi_0    pypi
    expat                     2.2.10               he6710b0_2
    filelock                  3.0.12                   pypi_0    pypi
    fontconfig                2.13.0               h9420a91_0
    freetype                  2.10.4               h5ab3b9f_0
    gdown                     3.12.2                   pypi_0    pypi
    glib                      2.66.1               h92f7085_0
    google-auth               1.23.0                   pypi_0    pypi
    google-auth-oauthlib      0.4.2                    pypi_0    pypi
    grpcio                    1.33.2                   pypi_0    pypi
    gst-plugins-base          1.14.0               hbbd80ab_1
    gstreamer                 1.14.0               hb31296c_0
    icu                       58.2                 he6710b0_3
    idna                      2.10                     pypi_0    pypi
    imageio                   2.9.0                    pypi_0    pypi
    importlib-metadata        3.1.1                    pypi_0    pypi
    intel-openmp              2020.2                      254
    jpeg                      9b                   h024ee3a_2
    jpeg4py                   0.1.4                    pypi_0    pypi
    jsonpatch                 1.28                     pypi_0    pypi
    jsonpointer               2.0                      pypi_0    pypi
    kiwisolver                1.3.0            py37h2531618_0
    lcms2                     2.11                 h396b838_0
    ld_impl_linux-64          2.33.1               h53a641e_7
    libedit                   3.1.20191231         h14c3975_1
    libffi                    3.3                  he6710b0_2
    libgcc-ng                 9.1.0                hdf63c60_0
    libpng                    1.6.37               hbc83047_0
    libstdcxx-ng              9.1.0                hdf63c60_0
    libtiff                   4.1.0                h2733197_1
    libuuid                   1.0.3                h1bed415_2
    libxcb                    1.14                 h7b6447c_0
    libxml2                   2.9.10               hb55368b_3
    lvis                      0.5.3                    pypi_0    pypi
    lz4-c                     1.9.2                heb0550a_3
    markdown                  3.3.3                    pypi_0    pypi
    matplotlib                3.3.2                         0
    matplotlib-base           3.3.2            py37h817c723_0
    mkl                       2020.2                      256
    mkl-service               2.3.0            py37he904b0f_0
    mkl_fft                   1.2.0            py37h23d657b_0
    mkl_random                1.1.1            py37h0573a6f_0
    ncurses                   6.2                  he6710b0_1
    networkx                  2.5                      pypi_0    pypi
    ninja                     1.10.2           py37hff7bd54_0
    numpy                     1.19.2           py37h54aff64_0
    numpy-base                1.19.2           py37hfa32c7d_0
    oauthlib                  3.1.0                    pypi_0    pypi
    olefile                   0.46                     py37_0
    opencv-python             4.4.0.46                 pypi_0    pypi
    openssl                   1.1.1h               h7b6447c_0
    pandas                    1.1.3            py37he6710b0_0
    pcre                      8.44                 he6710b0_0
    pillow                    8.0.1            py37he98fc37_0
    pip                       20.3             py37h06a4308_0
    protobuf                  3.14.0                   pypi_0    pypi
    pyasn1                    0.4.8                    pypi_0    pypi
    pyasn1-modules            0.2.8                    pypi_0    pypi
    pycocotools               2.0.2                    pypi_0    pypi
    pycparser                 2.20                     pypi_0    pypi
    pyparsing                 2.4.7                      py_0
    pyqt                      5.9.2            py37h05f1152_2
    pysocks                   1.7.1                    pypi_0    pypi
    python                    3.7.9                h7579374_0
    python-dateutil           2.8.1                      py_0
    pytorch                   1.4.0           py3.7_cuda10.0.130_cudnn7.6.3_0    pytorch
    pytz                      2020.4             pyhd3eb1b0_0
    pywavelets                1.1.1                    pypi_0    pypi
    pyzmq                     20.0.0                   pypi_0    pypi
    qt                        5.9.7                h5867ecd_1
    readline                  8.0                  h7b6447c_0
    requests                  2.25.0                   pypi_0    pypi
    requests-oauthlib         1.3.0                    pypi_0    pypi
    rsa                       4.6                      pypi_0    pypi
    scikit-image              0.17.2                   pypi_0    pypi
    scipy                     1.5.4                    pypi_0    pypi
    setuptools                50.3.1           py37h06a4308_1
    sip                       4.19.8           py37hf484d3e_0
    six                       1.15.0           py37h06a4308_0
    sqlite                    3.33.0               h62c20be_0
    tb-nightly                2.5.0a20201202           pypi_0    pypi
    tensorboard-plugin-wit    1.7.0                    pypi_0    pypi
    tifffile                  2020.11.26               pypi_0    pypi
    tikzplotlib               0.9.6                    pypi_0    pypi
    tk                        8.6.10               hbc83047_0
    torchfile                 0.1.0                    pypi_0    pypi
    torchvision               0.5.0                py37_cu100    pytorch
    tornado                   6.0.4            py37h7b6447c_1
    tqdm                      4.51.0             pyhd3eb1b0_0
    urllib3                   1.26.2                   pypi_0    pypi
    visdom                    0.1.8.9                  pypi_0    pypi
    websocket-client          0.57.0                   pypi_0    pypi
    werkzeug                  1.0.1                    pypi_0    pypi
    wheel                     0.35.1             pyhd3eb1b0_0
    xz                        5.2.5                h7b6447c_0
    zipp                      3.4.0                    pypi_0    pypi
    zlib                      1.2.11               h7b6447c_3
    zstd                      1.4.5                h9ceee32_0
    
    

    I'm not sure if the installed pytorch 1.4.0 and torchvision 0.5.0 are the recommended versions. Or does pytorch 1.4.0 (py3.7_cuda10.0.130_cudnn7.6.3_0) conflict with my CUDA 10.2 and cuDNN 7.6.5? Any help would be appreciated. Thanks!


    I also tried reducing batch_size from 26 to 8 and samples_per_epoch from 26000 to 16000, so the total number of batches changed from 1000 to 2000. But it still broke at batch 204:

    [train: 1, 200 / 2000] FPS: 18.3 (42.1)  ,  Loss/total: 7.37861  ,  Loss/bb_ce: 4.53875  ,  ClfTrain/clf_ce: 4.65750
    [train: 1, 201 / 2000] FPS: 17.9 (3.4)  ,  Loss/total: 7.37244  ,  Loss/bb_ce: 4.53805  ,  ClfTrain/clf_ce: 4.65128
    [train: 1, 202 / 2000] FPS: 17.9 (42.1)  ,  Loss/total: 7.36871  ,  Loss/bb_ce: 4.53778  ,  ClfTrain/clf_ce: 4.64706
    [train: 1, 203 / 2000] FPS: 18.0 (41.8)  ,  Loss/total: 7.36853  ,  Loss/bb_ce: 4.53980  ,  ClfTrain/clf_ce: 4.64609
    [train: 1, 204 / 2000] FPS: 18.0 (41.7)  ,  Loss/total: 7.36477  ,  Loss/bb_ce: 4.54119  ,  ClfTrain/clf_ce: 4.64213
    Training crashed at epoch 1
    Traceback for the error!
    Traceback (most recent call last):
      File "../ltr/trainers/base_trainer.py", line 70, in train
        self.train_epoch()
      File "../ltr/trainers/ltr_trainer.py", line 80, in train_epoch
        self.cycle_dataset(loader)
      File "../ltr/trainers/ltr_trainer.py", line 61, in cycle_dataset
        loss, stats = self.actor(data)
      File "../ltr/actors/tracking.py", line 95, in __call__
        test_proposals=data['test_proposals'])
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "../ltr/models/tracking/dimpnet.py", line 66, in forward
        iou_pred = self.bb_regressor(train_feat_iou, test_feat_iou, train_bb, test_proposals)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "../ltr/models/bbreg/atom_iou_net.py", line 86, in forward
        modulation = self.get_modulation(feat1, bb1)
      File "../ltr/models/bbreg/atom_iou_net.py", line 162, in get_modulation
        fc3_r = self.fc3_1r(roi3r)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
        input = module(input)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
        return self.conv2d_forward(input, self.weight)
      File "/home/xxx/anaconda3/envs/pytracking/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
        self.padding, self.dilation, self.groups)
    RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR
    
    
    opened by DEQDON 7
  • test index error

    Traceback (most recent call last):
      File "run_tracker.py", line 65, in <module>
        main()
      File "run_tracker.py", line 61, in main
        args.threads, {'use_visdom': args.use_visdom, 'server': args.visdom_server, 'port': args.visdom_port})
      File "run_tracker.py", line 37, in run_tracker
        run_dataset(dataset, trackers, debug, threads, visdom_info=visdom_info)
      File "../pytracking/evaluation/running.py", line 203, in run_dataset
        pool.starmap(run_sequence, param_list)
      File "/home/bobo/D/anaconda3/envs/pytracking/lib/python3.7/multiprocessing/pool.py", line 276, in starmap
        return self._map_async(func, iterable, starmapstar, chunksize).get()
      File "/home/bobo/D/anaconda3/envs/pytracking/lib/python3.7/multiprocessing/pool.py", line 657, in get
        raise self._value
    IndexError: invalid index to scalar variable.

    Thank you very much for your work. How can I solve this problem?

    opened by lyt31 1
  • visdom.py reports an error when running the pretrained dimp50

    The command I run is python pytracking/run_tracker.py dimp dimp50 --dataset_name got10k_val --debug 1 --threads 0, and it raises an error in visdom.py, line 332: ValueError: only one element tensors can be converted to Python scalars. How can I solve this problem? Thanks!

    opened by NKdryer 1
  • tensorlist.py has invalid syntax

    Hello, when I run the program, it shows that there are syntax errors in tensorlist.py, specifically in __matmul__, __rmatmul__, and attribute.

    In __matmul__ and __rmatmul__, ([e1 @ e2 for e1, e2 in zip(self, other)]) reports that @ is invalid syntax.

    In attribute, (self, attr: str, *args) reports that : is invalid syntax.

    I want to know how to resolve these errors and look forward to your reply.

    opened by bathsheba111 0
  • AttributeError: module 'torch' has no attribute 'rfft'

    When I updated PyTorch to 1.9.0 and evaluated atom default, this error occurred in fourier.py: AttributeError: module 'torch' has no attribute 'rfft', which means the function changed in newer PyTorch versions. Rather than downgrading PyTorch, how can the code be modified to support more PyTorch versions? Here: https://github.com/visionml/pytracking/blob/47d9c1641eca44137c9c71ed398da91bf301c751/pytracking/libs/fourier.py#L19-L31
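    One way to bridge the change, sketched under the assumption that the call sites need the old real-tensor layout of torch.rfft(x, 2): dispatch on the installed version and convert the new complex output back. rfft2_compat is an illustrative helper, not code from the repository, and torch.irfft call sites would need an analogous wrapper.

    import torch

    def rfft2_compat(x):
        # Older PyTorch: torch.rfft(x, 2) returns a real tensor whose
        # trailing dimension of size 2 holds the real/imaginary parts.
        # Newer PyTorch (torch.fft module, available since 1.7) returns a
        # complex tensor, so view_as_real restores the old layout; the
        # default normalization and one-sidedness match.
        if hasattr(torch, 'rfft'):
            return torch.rfft(x, 2)
        return torch.view_as_real(torch.fft.rfft2(x))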

    opened by laisimiao 0
  • Waiting for the open-source code

    Hello Contributors,

    Actually, I am working on research around this topic and need to showcase output on a custom dataset. If possible, can you please share the date on which the open-source code for "Generating Masks from Boxes by Mining Spatio-Temporal Consistencies in Videos" will be available here?

    Thanks in advance, Dhrumil

    opened by Dhrumil-Zion 0
Releases
  • v1.2 (Jan 16, 2020)

    Stable release after integrating the DiMP tracker.

    Updates:

    • Integration of the DiMP tracker (ICCV 2019)
    • Visualization with Visdom
    • VOT integration
    • Many new network modules
    • Multi-GPU training
    • PyTorch v1.2 support

    Requires PyTorch version 1.2 or newer.

    Implemented trackers: DiMP, ATOM and ECO.

  • v1.1 (Sep 1, 2019)

    First stable release of PyTracking before the integration of the DiMP tracker.

    PyTorch version: v1.1.

    Implemented trackers: ATOM and ECO.

  • v1.0 (May 5, 2019)