Powerful and efficient Computer Vision Annotation Tool (CVAT)

Overview

Computer Vision Annotation Tool (CVAT)


CVAT is a free, online, interactive video and image annotation tool for computer vision. Our team uses it to annotate millions of objects with different properties, and many UI and UX decisions are based on feedback from a professional data annotation team. Try it online at cvat.org.

CVAT screenshot

Documentation

Screencasts

Supported annotation formats

Format selection is possible after clicking the Upload annotation and Dump annotation buttons. The Datumaro dataset framework allows additional dataset transformations via its command line tool and Python library.

For more information about supported formats look at the documentation.
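Real conversions should go through Datumaro, but as a stdlib-only illustration of what such a transformation involves, the sketch below (cvat_boxes_to_coco is a hypothetical helper, not CVAT or Datumaro API, and the XML is simplified) pulls bounding boxes out of a CVAT-for-images style dump and re-expresses them as COCO-style [x, y, width, height] records:

```python
# Illustrative only: extract <box> elements from a simplified CVAT XML
# dump and convert corner coordinates to COCO-style [x, y, w, h] boxes.
import xml.etree.ElementTree as ET

def cvat_boxes_to_coco(cvat_xml):
    root = ET.fromstring(cvat_xml)
    annotations = []
    for image in root.iter("image"):
        for box in image.iter("box"):
            x, y = float(box.get("xtl")), float(box.get("ytl"))
            w = float(box.get("xbr")) - x
            h = float(box.get("ybr")) - y
            annotations.append({
                "image_id": int(image.get("id")),
                "category": box.get("label"),
                "bbox": [x, y, w, h],  # COCO uses [x, y, width, height]
            })
    return annotations

demo = """<annotations>
  <image id="0" name="frame_000000.png" width="1280" height="720">
    <box label="car" xtl="10" ytl="20" xbr="110" ybr="70"/>
  </image>
</annotations>"""
print(cvat_boxes_to_coco(demo))
# → [{'image_id': 0, 'category': 'car', 'bbox': [10.0, 20.0, 100.0, 50.0]}]
```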

Annotation format                    Import  Export
CVAT for images                      X       X
CVAT for video                       X       X
Datumaro                                     X
PASCAL VOC                           X       X
Segmentation masks from PASCAL VOC   X       X
YOLO                                 X       X
MS COCO Object Detection             X       X
TFRecord                             X       X
MOT                                  X       X
LabelMe 3.0                          X       X
ImageNet                             X       X
CamVid                               X       X
WIDER Face                           X       X
VGGFace2                             X       X
Market-1501                          X       X
ICDAR13/15                           X       X

Deep learning serverless functions for automatic labeling

Name                             Type        Framework   CPU  GPU
Deep Extreme Cut                 interactor  OpenVINO    X
Faster RCNN                      detector    OpenVINO    X
Mask RCNN                        detector    OpenVINO    X
YOLO v3                          detector    OpenVINO    X
Object reidentification          reid        OpenVINO    X
Semantic segmentation for ADAS   detector    OpenVINO    X
Text detection v4                detector    OpenVINO    X
SiamMask                         tracker     PyTorch     X    X
f-BRS                            interactor  PyTorch     X
HRNet                            interactor  PyTorch          X
Inside-Outside Guidance          interactor  PyTorch     X
Faster RCNN                      detector    TensorFlow  X    X
Mask RCNN                        detector    TensorFlow  X    X
RetinaNet                        detector    PyTorch     X    X

Online demo: cvat.org

This is an online demo running the latest version of the annotation tool. Try it without a local installation. Users can see only their own tasks or tasks assigned to them.

Disabled features:

Limitations:

  • No more than 10 tasks per user
  • Uploaded data is limited to 500 MB

Prebuilt Docker images

Prebuilt Docker images for CVAT releases are available on Docker Hub:

LICENSE

Code released under the MIT License.

This software uses LGPL licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.

FFmpeg is an open source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. Intel is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.

Partners

  • Onepanel is an open source vision AI platform that fully integrates CVAT with scalable data processing and parallelized training pipelines.
  • DataIsKey uses CVAT as their prime data labeling tool to offer annotation services for projects of any size.
  • Human Protocol uses CVAT to add an annotation service to the Human Protocol.
  • Cogito Tech LLC, a Human-in-the-Loop workforce solutions provider, used CVAT to annotate about 5,000 images for a brand operating in the fashion segment.

Questions

Questions about CVAT usage or unclear concepts can be posted in our Gitter chat for quick replies from contributors and other users.

However, if you have a feature request or a bug report that can be reproduced, feel free to open an issue (with steps to reproduce, if it's a bug report) on GitHub issues.

If you are not sure, or just want to browse other users' common questions, the Gitter chat is the way to go.

Other ways to ask questions and get our support:

Links

Comments
  • Cuboid annotation

    Cuboid annotation

    Addressing https://github.com/opencv/cvat/issues/147

    Cuboid Annotation:

    Description

    This PR adds fully functional cuboid annotation to CVAT. The cuboids are fully integrated within CVAT and support the regular features of other shapes, such as copy-pasting, labels, etc.

    Usage

    Cuboids are created just like bounding boxes: simply select the cuboid shape in the UI and create one. Cuboids may be edited by dragging certain edges, points or faces. Editing is constrained by a two-point perspective model, that is, all non-vertical edges converge on one of two vanishing points.

    You may see these vanishing points in action by checking the cuboid projection lines checkbox in the bottom left of the player.

    Annotation dump

    Points in the dump are ordered by vertical edges, starting with the leftmost edge and moving in counter clockwise order. The first point of each edge is always the top one.

    For example, the first point would be the top point of the leftmost edge, the second point the bottom point of the leftmost edge, and the third point the top point of the edge at the front of the cuboid.
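    As a hypothetical sketch of that ordering (the helper and the edge coordinates below are illustrative, not CVAT code):

```python
# Flatten a cuboid into the dump order described above: vertical edges
# counter-clockwise starting from the leftmost one, with the top point
# of each edge emitted before its bottom point.
def flatten_cuboid(edges):
    """edges: four (top, bottom) point pairs for the vertical edges,
    already ordered counter-clockwise from the leftmost edge."""
    points = []
    for top, bottom in edges:
        points.append(top)      # top point of the edge comes first
        points.append(bottom)
    return points

# Illustrative coordinates: leftmost, front, rightmost, rear edges.
edges = [((0, 0), (0, 4)), ((3, 1), (3, 5)),
         ((6, 0), (6, 4)), ((3, -1), (3, 3))]
print(flatten_cuboid(edges))
```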

    Known issues

    • Currently, this build only supports dumping and uploading in cvat-xml format.
    • The copy-paste buffer cuboid is just a polyline, but it is still usable

    The cuboids have been developed with feedback from an in-house annotation team. The feature is fully functional, but of course any feedback or comments are appreciated!

    enhancement 
    opened by HollowTube 71
  • CVAT-3D milestone6

    CVAT-3D milestone6

    Hi @bsekachev , @nmanovic , @zhiltsov-max

    CVAT 3D Milestone 6 changes:

    Added support for Dump annotations, Export annotations and Upload annotations in PCD and KITTI formats. The code changes are only in four files, i.e. bindings.py and registry.py, plus the new dataset modules pointcloud.py and velodynepoint.py.

    The rest of the files are from the M5 base branch; I was waiting for it to be merged, but since review comments are still in progress I created this PR for M6.


    • [x] I submit my changes into the develop branch

    • We shall add the changes to the CHANGELOG (https://github.com/opencv/cvat/blob/develop/CHANGELOG.md) after the M5 code is merged.

    • Datumaro PR: https://github.com/openvinotoolkit/datumaro/pull/245

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.

    • [x] I have updated the license header for each file

    opened by manasars 69
  • Automatic Annotation

    Automatic Annotation

    I deployed my custom model for automatic annotation and it seems to work: it shows the inference progress bar, and the Docker logs look normal. However, the annotations did not show up on my image dataset in CVAT.

    What can I do?

    More info: the following is what I send to CVAT, i.e. the context.Response part: <class 'list'> --- [{'confidence': '0.4071217', 'label': '0.0', 'points': [360.0, 50.0, 1263.0, 720.0], 'type': 'rectangle'}]
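    For reference, a quick stdlib-only sanity check of that payload shape (validate_detections is an illustrative helper, not part of CVAT):

```python
# Check the list-of-dicts payload a detector function returns: each
# entry needs confidence/label/points/type, rectangles need 4 points,
# and confidence must parse to a value in [0, 1].
def validate_detections(dets):
    for d in dets:
        if not {"confidence", "label", "points", "type"} <= d.keys():
            return False
        if d["type"] == "rectangle" and len(d["points"]) != 4:
            return False
        if not 0.0 <= float(d["confidence"]) <= 1.0:
            return False
    return True

payload = [{'confidence': '0.4071217', 'label': '0.0',
            'points': [360.0, 50.0, 1263.0, 720.0], 'type': 'rectangle'}]
print(validate_detections(payload))  # → True
```

    The payload above passes such a check, so the shape itself looks fine; one possible culprit (an assumption, not a confirmed diagnosis) is the label value '0.0', since results whose label does not match a label defined in the task cannot be mapped onto it.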

    bug 
    opened by QuarTerll 64
  • Create multiple tasks when uploading multiple videos

    Create multiple tasks when uploading multiple videos

    Motivation and context

    Resolve #916

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 61
  • Added Cypress testing for feature: Multiple tasks creating from videos

    Added Cypress testing for feature: Multiple tasks creating from videos

    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 60
  • Adding Kubernetes templates and deployment guide

    Adding Kubernetes templates and deployment guide

    Motivation and context

    The topic was raised a couple of times, in issues like #1087. Since Kubernetes is widely used, easy deployment into a Kubernetes environment would provide great value to the community and help bring CVAT to a wider audience.

    Especially due to changes like #1641, it is now much easier to deploy CVAT in a k8s environment.

    How has this been tested?

    I deployed this in a couple of namespaces within our cluster (with and without an NVIDIA GPU). Furthermore, I did not make any changes to the code, so the only real issue was networking. Since I was following the docker-compose.yml closely, there were no real challenges.

    Checklist

    License

    • [X] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [X] I have updated the license header for each file (see an example below)
    # Copyright (C) 2020 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by Langhalsdino 51
  • CVAT 3D Milestone-5

    CVAT 3D Milestone-5

    CVAT-3D-Milestone5: implement cuboid operations in the right sidebar and save annotations. Changes include:

    • Displaying the list of annotated objects in the right sidebar.
    • Switching the lock, hidden, pinned and occluded properties of objects.
    • Removing and saving annotations.
    • An Appearance tab to change opacity and outlined borders.

    Test cases: manual unit testing done locally. Existing test cases work as expected. System test cases will be shared.

    [x] I submit my changes into the develop branch

    [x] I submit my code changes under the same MIT License that covers the project.

    opened by manasars 48
  • Deleted frames

    Deleted frames

    Resolve #4235 Resolve #3000

    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file ~~- [ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 44
  • Added paint brush tools

    Added paint brush tools

    Motivation and context

    Resolved #1849 Resolved #4868

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by bsekachev 42
  • Project export

    Project export

    Motivation and context

    PR provides the ability to export the project as a dataset or annotation, reworked export menus.

    Resolve #2911 Resolve #2678 Related #1278


    TODOs:

    • [x] Fix image exporting in some cases
    • [x] Add support for CVAT formats
    • [x] Add server unit tests
    • [x] Add UI support for exporting project and rework export task dataset menus

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file ~~- [ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2021 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 38
  • Tracking functionality for bounding boxes

    Tracking functionality for bounding boxes

    Hi, we are adding several features to CVAT which will be open-sourced. We might need your advice along the way and wanted to know if you can help. Currently, we are trying to change the interpolation: as of now, interpolation just puts a bounding box in the remaining frames at the same position as in the first frame. We are trying to change that and add tracking. Since the code base is huge, I am unable to understand the exact flow of the process.

    For now, say that instead of constant coordinates I want to shift the box to the right a little bit (e.g. 10 pixels). I guess it's a trivial task; I just need your help regarding it, if possible. Thanks.
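    A toy sketch of that idea (pure illustration, not CVAT's actual interpolation code): instead of copying the first frame's box verbatim, shift it right by dx pixels on every interpolated frame.

```python
# Produce one box per interpolated frame, each shifted dx pixels
# further to the right than the previous one.
def interpolate_shifted(box, num_frames, dx=10):
    """box: [xtl, ytl, xbr, ybr] on the first frame."""
    return [[box[0] + i * dx, box[1], box[2] + i * dx, box[3]]
            for i in range(num_frames)]

print(interpolate_shifted([100, 50, 200, 150], 3))
# → [[100, 50, 200, 150], [110, 50, 210, 150], [120, 50, 220, 150]]
```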

    enhancement 
    opened by savan77 34
  • Adjust Windows Installation Instructions to account for Nuclio issue#1821

    Adjust Windows Installation Instructions to account for Nuclio issue#1821

    Motivation and context

    In my understanding of https://github.com/nuclio/nuclio/issues/1821, the Nuctl (1.8.14) CLI is looking for a path that is only valid on a Linux environment, which it does not find when running via Git Bash (even when using the Windows version of Nuctl). However, installing CVAT onto a Linux VM allows Nuctl to locate this path and operate normally.

    (I am still learning how to use GitHub as far as pull requests / forks / etc work, sorry if this is not the right way to approach this change. Please let me know if I've missed something important.)

    How has this been tested?

    This is only a change to instructions, but I did test this on multiple machines. As long as the machine is capable of running a Linux kernel, it shouldn't run into any issues.

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [x] I have updated the documentation accordingly (Purely documentation changed)
    • [ ] ~~I have added tests to cover my changes~~ (Does not change code)
    • [ ] ~~I have linked related issues (read github docs)~~ (This doesn't resolve the root issue, rather just works around it; doesn't make sense to cause an automatic close)
    • [ ] ~~I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)~~ (Was not necessary)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AstronomyGuy 0
  • [WIP] Fix pagination in some endpoints

    [WIP] Fix pagination in some endpoints

    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by zhiltsov-max 0
  • [WIP] Improve error messages when limits reached

    [WIP] Improve error messages when limits reached

    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by kirill-sizov 0
  • It is very slow when more than 10 people do their job together.

    It is very slow when more than 10 people do their job together.

    Hi, developer,

    It is very slow when more than 10 people do their jobs together. This issue bothers me very much; how can I resolve it? By the way, the memory of the server is sufficient!

    opened by YuanNBB 0
  • YoloV7 serverless detector feature for auto annotation

    YoloV7 serverless detector feature for auto annotation

    Motivation and context

    Integration of YOLOv7 as a serverless nuclio function that can be used for auto-labeling. YOLOv7 is the state of the art at the time of this PR, so it makes sense for CVAT to support it. The integration into CVAT is quite simple: it is Docker-based, modeled on Ultralytics YOLOv5, with a COCO-pretrained model (https://github.com/WongKinYiu/yolov7) and a docker image (https://hub.docker.com/r/ultralytics/yolov5).

    related issue: #5548

    How has this been tested?

    Automatic annotation was run using YOLOv7 on a custom dataset. The serverless function was deployed using

    nuctl deploy --project-name cvat \
      --path serverless/onnx/WongKinYiu/yolov7/nuclio \
      --volume `pwd`/serverless/common:/opt/nuclio/common \
      --platform local
    

    Then, using the 'Automatic annotation' action, the function was tested and the auto-generated labels were inspected to verify that no coordinate mismatch occurs.

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [x] I have updated the documentation accordingly
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    Using a custom model:

    1. Export your model with NMS for an image resolution of 640x640 (preferred).
    2. Copy your custom model yolov7-custom.onnx to /serverless/common
    3. Modify the function.yaml file according to your labels.
    4. Modify model_handler.py as follows:
     self.model_path = "yolov7-custom.onnx"
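    For step 3, the labels a function exposes live in its metadata annotations; a sketch of what the edited section of function.yaml might look like (the exact keys and the label names below are assumptions based on other CVAT serverless functions — check them against the existing file):

```yaml
metadata:
  name: onnx-yolov7-custom
  annotations:
    name: YOLO v7 custom
    type: detector
    framework: onnx
    # hypothetical labels: replace with the classes your model predicts
    spec: |
      [
        { "id": 0, "name": "defect" },
        { "id": 1, "name": "scratch" }
      ]
```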
    

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    models 
    opened by hardikdava 2
  • Mmdetection MaskRCNN serverless support for semi-automatic annotation

    Mmdetection MaskRCNN serverless support for semi-automatic annotation

    I have created serverless support for semi-automatic annotation using the MMDetection implementation of Mask R-CNN. I believe this would be helpful for anyone who would like to use any of MMDetection's implementations to build a serverless function. Do let me know if it is needed. Thank you.

    opened by michael-selasi-dzamesi 2
Releases: v2.3.0
Owner: OpenVINO Toolkit