CVAT is a free, online, interactive video and image annotation tool for computer vision

Overview

Computer Vision Annotation Tool (CVAT)


CVAT is a free, online, interactive video and image annotation tool for computer vision. Our team uses it to annotate millions of objects with different properties, and many UI and UX decisions are based on feedback from a professional data annotation team. Try it online at cvat.org.

CVAT screenshot

Documentation

Screencasts

Supported annotation formats

Format selection is possible after clicking the Upload annotation and Dump annotation buttons. The Datumaro dataset framework allows additional dataset transformations via its command-line tool and Python library.

For more information about supported formats, see the documentation.

| Annotation format | Import | Export |
| --- | --- | --- |
| CVAT for images | X | X |
| CVAT for a video | X | X |
| Datumaro | | X |
| PASCAL VOC | X | X |
| Segmentation masks from PASCAL VOC | X | X |
| YOLO | X | X |
| MS COCO Object Detection | X | X |
| TFrecord | X | X |
| MOT | X | X |
| LabelMe 3.0 | X | X |
| ImageNet | X | X |
| CamVid | X | X |
| WIDER Face | X | X |
| VGGFace2 | X | X |
| Market-1501 | X | X |
| ICDAR13/15 | X | X |
| Open Images V6 | X | X |
| Cityscapes | X | X |
| KITTI | X | X |
| LFW | X | X |
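
As a sketch of the Datumaro transformations mentioned above, a dump in one format can be re-exported in another via the Python library (the format names "voc" and "coco" are assumptions to verify against your Datumaro version):

    from datumaro.components.dataset import Dataset

    # Load a Pascal VOC dump and re-export it as MS COCO.
    # Format names are assumptions; check `datum --help` for the exact list.
    dataset = Dataset.import_from("path/to/voc_dump", "voc")
    dataset.export("path/to/coco_out", "coco")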

Deep learning serverless functions for automatic labeling

| Name | Type | Framework | CPU | GPU |
| --- | --- | --- | --- | --- |
| Deep Extreme Cut | interactor | OpenVINO | X | |
| Faster RCNN | detector | OpenVINO | X | |
| Mask RCNN | detector | OpenVINO | X | |
| YOLO v3 | detector | OpenVINO | X | |
| Object reidentification | reid | OpenVINO | X | |
| Semantic segmentation for ADAS | detector | OpenVINO | X | |
| Text detection v4 | detector | OpenVINO | X | |
| YOLO v5 | detector | PyTorch | X | |
| SiamMask | tracker | PyTorch | X | X |
| f-BRS | interactor | PyTorch | X | |
| HRNet | interactor | PyTorch | | X |
| Inside-Outside Guidance | interactor | PyTorch | X | |
| Faster RCNN | detector | TensorFlow | X | X |
| Mask RCNN | detector | TensorFlow | X | X |
| RetinaNet | detector | PyTorch | X | X |
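
Each function is deployed as a nuclio serverless function; for example, SiamMask can be deployed with the same nuctl pattern used for YOLOv7 later on this page (the path below is an assumption about the repository layout):

    nuctl deploy --project-name cvat \
      --path serverless/pytorch/foolwood/siammask/nuclio \
      --platform local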

Online demo: cvat.org

This is an online demo running the latest version of the annotation tool. Try it without a local installation. Users can see only their own or assigned tasks.

Disabled features:

Limitations:

  • No more than 10 tasks per user
  • Uploaded data is limited to 500 MB

Prebuilt Docker images

Prebuilt Docker images for CVAT releases are available on Docker Hub:
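
For example, a specific release can be pulled as follows (the image names and tags are assumptions; check the Docker Hub pages for the exact ones):

    docker pull cvat/server:v2.3.0
    docker pull cvat/ui:v2.3.0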

LICENSE

Code released under the MIT License.

This software uses LGPL licensed libraries from the FFmpeg project. The exact steps on how FFmpeg was configured and compiled can be found in the Dockerfile.

FFmpeg is an open source framework licensed under LGPL and GPL. See https://www.ffmpeg.org/legal.html. You are solely responsible for determining if your use of FFmpeg requires any additional licenses. Intel is not responsible for obtaining any such licenses, nor liable for any licensing fees due in connection with your use of FFmpeg.

Partners

  • Onepanel is an open source vision AI platform that fully integrates CVAT with scalable data processing and parallelized training pipelines.
  • DataIsKey uses CVAT as their prime data labeling tool to offer annotation services for projects of any size.
  • Human Protocol uses CVAT as a way of adding an annotation service to the Human Protocol.
  • Cogito Tech LLC, a Human-in-the-Loop Workforce Solutions Provider, used CVAT to annotate about 5,000 images for a brand operating in the fashion segment.
  • FiftyOne is an open-source dataset curation and model analysis tool for visualizing, exploring, and improving computer vision datasets and models that is tightly integrated with CVAT for annotation and label refinement.

Questions

CVAT usage-related questions or unclear concepts can be posted in our Gitter chat for quick replies from contributors and other users.

However, if you have a feature request or a bug report that can be reproduced, feel free to open an issue (with steps to reproduce if it's a bug report) on GitHub Issues.

If you are not sure, or just want to browse other users' common questions, the Gitter chat is the way to go.

Other ways to ask questions and get our support:

Links

Comments
  • Cuboid annotation

    Addressing https://github.com/opencv/cvat/issues/147

    Description

    This PR adds fully functional cuboid annotation to CVAT. Cuboids are fully integrated and support the regular features of other shapes, such as copy-pasting and labels.

    Usage

    Cuboids are created just like bounding boxes: simply select the cuboid shape in the UI and draw. Cuboids may be edited by dragging certain edges, points, or faces. Editing is constrained by a two-point perspective model; that is, all non-vertical edges converge on one of two vanishing points.

    You can see these vanishing points in action by checking the cuboid projection lines checkbox in the bottom left of the player.

    Annotation dump

    Points in the dump are ordered by vertical edges, starting with the leftmost edge and moving counter-clockwise. The first point of each edge is always the top one.

    For example, the first point is the top point of the leftmost edge, the second point is the bottom point of the leftmost edge, and the third point is the top point of the edge at the front of the cuboid.
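
    A minimal sketch of decoding that order (the helper name is hypothetical):

    # Group the dumped flat point list into (top, bottom) pairs per vertical
    # edge, following the counter-clockwise edge order described above.
    def decode_cuboid(points):
        # points: [x0, y0, x1, y1, ...] -- 8 vertices, 16 numbers
        vertices = list(zip(points[0::2], points[1::2]))
        # result[0] is the leftmost vertical edge; each entry is (top, bottom)
        return [(vertices[i], vertices[i + 1]) for i in range(0, 8, 2)]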

    Known issues

    • Currently, this build only supports dumping and uploading in the cvat-xml format.
    • The copy-paste buffer cuboid is just a polyline, but it is still usable.

    The cuboids have been developed with feedback from an in-house annotation team. This feature is fully functional, but of course any feedback or comments are appreciated!

    enhancement 
    opened by HollowTube 71
  • CVAT-3D milestone6

    Hi @bsekachev, @nmanovic, @zhiltsov-max,

    CVAT 3D Milestone 6 changes:

    Added support for Dump annotations, Export annotations, and Upload annotations in PCD and KITTI formats. The code changes touch only four files: bindings.py and registry.py, plus the new dataset modules pointcloud.py and velodynepoint.py.

    The rest of the files are from the M5 base branch; I was waiting for it to be merged, but since review comments are still in progress, I created this PR for M6.


    • [x] I submit my changes into the develop branch

    • We shall add the changes to the CHANGELOG (https://github.com/opencv/cvat/blob/develop/CHANGELOG.md) after the M5 code is merged. Datumaro PR: https://github.com/openvinotoolkit/datumaro/pull/245

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.

    • [x] I have updated the license header for each file

    opened by manasars 69
  • Automatic Annotation

    I deployed my custom model for automatic annotation and the deployment seems fine: it shows the inference progress bar, and the Docker logs look normal. However, the annotations do not show up on my image dataset in CVAT.

    What can I do?

    More info: the following is what I send back to CVAT, i.e. the body of the context.Response: <class 'list'> --- [{'confidence': '0.4071217', 'label': '0.0', 'points': [360.0, 50.0, 1263.0, 720.0], 'type': 'rectangle'}]
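
    For reference, a minimal hedged sketch of a nuclio handler that returns results in that shape (the detection values are copied from above; one common cause of invisible annotations is a label value that does not match any label name configured for the task, which a numeric string like '0.0' likely does not):

    import json

    def handler(context, event):
        # run your model here; the detection below is a placeholder
        results = [{
            "confidence": "0.4071217",
            "label": "person",  # assumption: must match a label defined for the task, not "0.0"
            "points": [360.0, 50.0, 1263.0, 720.0],
            "type": "rectangle",
        }]
        return context.Response(body=json.dumps(results), headers={},
                                content_type="application/json", status_code=200)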

    bug 
    opened by QuarTerll 64
  • Create multiple tasks when uploading multiple videos

    Motivation and context

    Resolve #916

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 61
  • Added Cypress testing for feature: Multiple tasks creating from videos

    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by AlexeyAlexeevXperienceAI 60
  • Adding Kubernetes templates and deployment guide

    Motivation and context

    The topic was raised a couple of times in issues like #1087. Since Kubernetes is widely used, easy deployment into a Kubernetes environment would provide great value to the community and help bring CVAT to a wider audience.

    Especially thanks to changes like #1641, it is now much easier to deploy CVAT in a k8s environment.

    How has this been tested?

    I deployed this in a couple of namespaces within our cluster (with and without an NVIDIA GPU). Furthermore, I did not make any changes to the code, so the only real issue was networking. Since I was following the docker-compose.yml closely, there were no real challenges.

    Checklist

    License

    • [X] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [X] I have updated the license header for each file (see an example below)
    # Copyright (C) 2020 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by Langhalsdino 51
  • CVAT 3D Milestone-5

    CVAT-3D Milestone 5: implement cuboid operations in the right side bar and save annotations. Changes include:

    • Implemented displaying the list of annotated objects in the right side bar.
    • Implemented switching the lock, hidden, pinned, and occluded properties of objects.
    • Implemented removing and saving annotations.
    • Implemented the Appearance tab to change opacity and outlined borders.

    Test cases: manual unit testing was done locally. Existing test cases work as expected. System test cases will be shared.

    • [x] I submit my changes into the develop branch

    • [x] I submit my code changes under the same MIT License that covers the project.

    opened by manasars 48
  • Deleted frames

    Resolve #4235 Resolve #3000

    Motivation and context

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file ~~- [ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 44
  • Added paint brush tools

    Motivation and context

    Resolved #1849 Resolved #4868

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2022 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by bsekachev 42
  • Project export

    Motivation and context

    This PR provides the ability to export a project as a dataset or annotations, and reworks the export menus.

    Resolve #2911 Resolve #2678 Related #1278


    TODOs:

    • [x] Fix image exporting in some cases
    • [x] Add support for CVAT formats
    • [x] Add server unit tests
    • [x] Add UI support for exporting project and rework export task dataset menus

    How has this been tested?

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file ~~- [ ] I have updated the documentation accordingly~~
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    • [x] I have updated the license header for each file (see an example below)
    # Copyright (C) 2021 Intel Corporation
    #
    # SPDX-License-Identifier: MIT
    
    opened by ActiveChooN 38
  • Tracking functionality for bounding boxes

    Hi, we are adding several features to CVAT that will be open-sourced. We might need your advice along the way and wanted to know if you can help. Currently, we are trying to change the interpolation. As of now, interpolation just places the bounding box in the remaining frames at the same position as in the first frame. We are trying to change that and add tracking. Since the code base is huge, I am unable to understand the exact flow of the process.

    For now, say that instead of constant coordinates I want to shift the box to the right a little (e.g., 10 pixels). I guess it's a trivial task; I just need your help with it, if possible. Thanks.
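
    A hypothetical sketch of the requested behavior (this is not CVAT's actual interpolation code):

    # Propagate a keyframe box with a constant rightward drift of dx pixels
    # per frame, instead of copying it to every frame unchanged.
    def propagate(box, n_frames, dx=10):
        x1, y1, x2, y2 = box
        return [(x1 + dx * i, y1, x2 + dx * i, y2) for i in range(1, n_frames + 1)]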

    enhancement 
    opened by savan77 34
  • [WIP] Fix pagination in some endpoints

    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by zhiltsov-max 0
  • [WIP] Improve error messages when limits reached

    Motivation and context

    How has this been tested?

    Checklist

    • [ ] I submit my changes into the develop branch
    • [ ] I have added a description of my changes into CHANGELOG file
    • [ ] I have updated the documentation accordingly
    • [ ] I have added tests to cover my changes
    • [ ] I have linked related issues (read github docs)
    • [ ] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    License

    • [ ] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    opened by kirill-sizov 0
  • It is very slow when more than 10 people work together.

    Hi, developer,

    It is very slow when more than 10 people work together. This problem bothers me very much; how can I resolve it? By the way, the server has enough memory!

    opened by YuanNBB 0
  • YoloV7 serverless detector feature for auto annotation

    Motivation and context

    Integration of YOLOv7 as a serverless nuclio function that can be used for auto-labeling. YOLOv7 is the state of the art at the time of this PR, so it makes sense to support it in CVAT. The integration into CVAT is quite simple: it is Docker-based, modeled on the Ultralytics YOLOv5 integration, with a COCO-pretrained model (https://github.com/WongKinYiu/yolov7) and a Docker image (https://hub.docker.com/r/ultralytics/yolov5).

    related issue: #5548

    How has this been tested?

    Automatic annotation was run using YOLOv7 on a custom dataset. The serverless function was deployed using

    nuctl deploy --project-name cvat \
      --path serverless/onnx/WongKinYiu/yolov7/nuclio \
      --volume `pwd`/serverless/common:/opt/nuclio/common \
      --platform local
    

    Then, using the 'Automatic annotation' action, the function was tested, and the auto-generated labels were inspected to check that no coordinate misfit occurs.

    Checklist

    • [x] I submit my changes into the develop branch
    • [x] I have added a description of my changes into CHANGELOG file
    • [x] I have updated the documentation accordingly
    • [x] I have added tests to cover my changes
    • [x] I have linked related issues (read github docs)
    • [x] I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)

    Use custom model:

    1. Export your model with NMS for an image resolution of 640x640 (preferred).
    2. Copy your custom model yolov7-custom.onnx to /serverless/common
    3. Modify function.yaml file according to your labels.
    4. Modify model_handler.py as follows:
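     # assumption: this assignment lives in the model wrapper's __init__ in model_handler.py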
     self.model_path = "yolov7-custom.onnx"
    

    License

    • [x] I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
    models 
    opened by hardikdava 2
  • Mmdetection MaskRCNN serverless support for semi-automatic annotation

    I have created serverless support for semi-automatic annotation using MMDetection's implementation of Mask R-CNN. I believe this would be helpful for anyone who would like to use any of MMDetection's implementations to build a serverless function. Do let me know if it is needed. Thank you.

    opened by michael-selasi-dzamesi 2
  • YoloV7 serverless support for automatic annotation

    It would be great to have support for the YOLOv7 object detector for automatic annotation. I have already created such a feature using an ONNX backend. Please let me know if it is needed so others can also use it. How can I provide a PR for it?

    opened by hardikdava 1