Polygon-Yolov5

Overview

This repository is based on Ultralytics/yolov5, with adjustments to enable polygon prediction boxes.

Section I. Description

The code is based on Ultralytics/yolov5; several functions were added or modified to enable polygon prediction boxes.

The modifications relative to Ultralytics/yolov5 are summarized below:

  1. data/polygon_ucas.yaml : Exemplar UCAS-AOD dataset to test the effects of polygon boxes

  2. data/images/UCAS-AOD : For the inference of polygon-yolov5s-ucas.pt

  3. models/common.py :
    3.1. class Polygon_NMS : Non-Maximum Suppression (NMS) module for Polygon Boxes
    3.2. class Polygon_AutoShape : Polygon Version of Original AutoShape, input-robust polygon model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and Polygon_NMS
    3.3. class Polygon_Detections : Polygon detections class for Polygon-YOLOv5 inference results

  4. models/polygon_yolov5s_ucas.yaml : Configuration file of polygon yolov5s for exemplar UCAS-AOD dataset

  5. models/yolo.py :
    5.1. class Polygon_Detect : Detect head for polygon yolov5 models with polygon box prediction
    5.2. class Polygon_Model : Polygon yolov5 models with polygon box prediction

  6. utils/iou_cuda : CUDA extension for iou computation of polygon boxes
    6.1. extensions.cpp : CUDA extension file
    6.2. inter_union_cuda.cu : CUDA code for computing iou of polygon boxes
    6.3. setup.py : for building CUDA extensions module polygon_inter_union_cuda, with two functions polygon_inter_union_cuda and polygon_b_inter_union_cuda

  7. utils/autoanchor.py :
    7.1. def polygon_check_anchors : Polygon version of original check_anchors
    7.2. def polygon_kmean_anchors : Create kmeans-evolved anchors from a polygon-enabled training dataset, using the minimum outer bounding box as an approximation

  8. utils/datasets.py :
    8.1. def polygon_random_perspective : Data augmentation for datasets with polygon boxes (augmentation effects: HSV-Hue, HSV-Saturation, HSV-Value, rotation, translation, scale, shear, perspective, flip up-down, flip left-right, mosaic, mixup)
    8.2. def polygon_box_candidates : Polygon version of original box_candidates
    8.3. class Polygon_LoadImagesAndLabels : Polygon version of original LoadImagesAndLabels
    8.4. def polygon_load_mosaic : Loads images in a 4-mosaic, with polygon boxes
    8.5. def polygon_load_mosaic9 : Loads images in a 9-mosaic, with polygon boxes
    8.6. def polygon_verify_image_label : Verify one image-label pair for polygon datasets
    8.7. def create_dataloader : Has been modified to include polygon datasets

  9. utils/general.py :
    9.1. def xyxyxyxyn2xyxyxyxy : Convert normalized xyxyxyxy or segments into pixel xyxyxyxy or segments
    9.2. def polygon_segment2box : Convert 1 segment label to 1 polygon box label
    9.3. def polygon_segments2boxes : Convert segment labels to polygon box labels
    9.4. def polygon_scale_coords : Rescale polygon coords (xyxyxyxy) from img1_shape to img0_shape
    9.5. def polygon_clip_coords : Clip bounding polygon xyxyxyxy bounding boxes to image shape (height, width)
    9.6. def polygon_inter_union_cpu : iou computation (polygon) with cpu
    9.7. def polygon_box_iou : Compute iou of polygon boxes via cpu or cuda
    9.8. def polygon_b_inter_union_cpu : iou computation (polygon) with cpu for class Polygon_ComputeLoss in loss.py
    9.9. def polygon_bbox_iou : Compute iou of polygon boxes for class Polygon_ComputeLoss in loss.py via cpu or cuda
    9.10. def polygon_non_max_suppression : Runs Non-Maximum Suppression (NMS) on inference results for polygon boxes
    9.11. def polygon_nms_kernel : Non maximum suppression kernel for polygon-enabled boxes
    9.12. def order_corners : Return sorted corners for loss.py::class Polygon_ComputeLoss::build_targets

  10. utils/loss.py :
    10.1. class Polygon_ComputeLoss : Compute loss for polygon boxes

  11. utils/metrics.py :
    11.1. class Polygon_ConfusionMatrix : Polygon version of original ConfusionMatrix

  12. utils/plots.py :
    12.1. def polygon_plot_one_box : Plot one polygon box on image
    12.2. def polygon_plot_one_box_PIL : Plot one polygon box on image via PIL
    12.3. def polygon_output_to_target : Convert model output to target format (batch_id, class_id, x1, y1, x2, y2, x3, y3, x4, y4, conf)
    12.4. def polygon_plot_images : Polygon version of original plot_images
    12.5. def polygon_plot_test_txt : Polygon version of original plot_test_txt
    12.6. def polygon_plot_targets_txt : Polygon version of original plot_targets_txt
    12.7. def polygon_plot_labels : Polygon version of original plot_labels

  13. polygon_train.py : For training polygon-yolov5 models

  14. polygon_test.py : For testing polygon-yolov5 models

  15. polygon_detect.py : For running detection with polygon-yolov5 models

  16. requirements.txt : Added the Python package shapely
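Several of the general.py additions above (polygon_inter_union_cpu, polygon_box_iou) compute the IoU of polygon boxes. The repo's own implementation is not reproduced here (it also offers a CUDA path, and its CPU path may rely on shapely); as an illustrative sketch, IoU for convex quadrilaterals can be computed with Sutherland-Hodgman clipping plus the shoelace formula:

```python
def polygon_area(pts):
    # Shoelace formula; pts is an ordered list of (x, y) vertices
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def _clip(subject, clipper):
    # Sutherland-Hodgman: clip `subject` against a convex `clipper` given in CCW order
    def inside(p, a, b):
        # True if p lies on the interior side of directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        # Intersection of the infinite lines through (p1, p2) and (a, b)
        x1, y1 = p1; x2, y2 = p2; x3, y3 = a; x4, y4 = b
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / den
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / den
        return (px, py)

    output = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inputs, output = output, []
        if not inputs:
            break
        s = inputs[-1]
        for e in inputs:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

def polygon_iou(p, q):
    # IoU of two convex polygons given as vertex lists in CCW order
    inter_pts = _clip(p, q)
    inter = polygon_area(inter_pts) if len(inter_pts) >= 3 else 0.0
    union = polygon_area(p) + polygon_area(q) - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit squares offset by (0.5, 0.5) intersect in a 0.25 area with union 1.75, so polygon_iou returns 1/7 ≈ 0.143.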

Section II. How Do Polygon Boxes Work? How Do They Differ from Axis-Aligned Boxes?

  1. build_targets in class Polygon_ComputeLoss & forward in class Polygon_Detect

  2. order_corners in general.py

  3. Illustrations of box loss of polygon boxes
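The corner-ordering step mentioned above (order_corners in general.py) puts the four predicted corners into a consistent order so the loss compares matching corners. The repo's exact rule is not reproduced here; the following is an illustrative approach that sorts the corners by angle around their centroid:

```python
import math

def order_corners(pts):
    """Illustrative corner ordering (not the repo's exact rule): sort the four
    corners by angle around their centroid, then rotate the list so it starts
    at the corner with the smallest x + y (roughly top-left). Note that with
    image coordinates (y pointing down) the angular sort runs clockwise."""
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    pts = sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    start = min(range(len(pts)), key=lambda i: pts[i][0] + pts[i][1])
    return pts[start:] + pts[:start]
```

A deterministic ordering like this matters because the polygon box loss regresses eight coordinates; without it, an equally correct prediction with permuted corners would incur a large spurious loss.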

Section III. Installation

For the CUDA extension to build without errors, please use CUDA version >= 11.2. The code has been verified on Ubuntu 16.04 with a Tesla K80 GPU.

# The following codes install CUDA 11.2 from scratch on Ubuntu 16.04, if you have installed it, please ignore
# If you are using other versions of systems, please check https://tutorialforlinux.com/2019/12/01/how-to-add-cuda-repository-for-ubuntu-based-oses-2/
# Install Ubuntu kernel head
sudo apt install linux-headers-$(uname -r)

# Pin the CUDA repo
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-ubuntu1604.pin
sudo mv cuda-ubuntu1604.pin /etc/apt/preferences.d/cuda-repository-pin-600
# Add the CUDA GPG key
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
# Set up the CUDA repo
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/ /"
# Refresh apt repositories
sudo apt update
# Install CUDA 11.2
sudo apt install cuda-11-2 -y
sudo apt install cuda-toolkit-11-2 -y
# Set up the path
echo 'export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}' >> $HOME/.bashrc
# You are done installing CUDA 11.2

# Check NVIDIA
nvidia-smi
# Update all apt packages
sudo apt-get update
sudo apt-get -y upgrade

# Begin installing Python 3.7
curl -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x ~/miniconda.sh
./miniconda.sh -b
echo "PATH=~/miniconda3/bin:$PATH" >> ~/.bashrc
source ~/.bashrc
conda install -y python=3.7
# You are done installing Python

The following commands set up Polygon-Yolov5.

# clone git repo
git clone https://github.com/XinzeLee/PolygonObjectDetection
cd PolygonObjectDetection/polygon-yolov5
# install python package requirements
pip install -r requirements.txt
# install CUDA extensions
cd utils/iou_cuda
python setup.py install
# cd back to polygon-yolov5 folder
cd .. && cd ..
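After building, you can sanity-check that the extension module is importable (the module name polygon_inter_union_cuda comes from setup.py above). This small helper is hypothetical, not part of the repo; if the import fails, the code can presumably still use the slower CPU IoU path:

```python
def cuda_extension_available():
    """Return True if the polygon_inter_union_cuda extension built by
    utils/iou_cuda/setup.py can be imported, else False."""
    try:
        import polygon_inter_union_cuda  # noqa: F401  (built CUDA extension)
        return True
    except ImportError:
        return False

print("CUDA IoU extension available:", cuda_extension_available())
```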

Section IV. Polygon-Tutorial 1: Deploy the Polygon Yolov5s

Try Polygon Yolov5s Model by Following Polygon-Tutorial 1

  1. Inference
     $ python polygon_detect.py --weights polygon-yolov5s-ucas.pt --img 1024 --conf 0.75 \
         --source data/images/UCAS-AOD --iou-thres 0.4 --hide-labels

  2. Test
     $ python polygon_test.py --weights polygon-yolov5s-ucas.pt --data polygon_ucas.yaml \
         --img 1024 --iou 0.65 --task val

  3. Train
     $ python polygon_train.py --weights polygon-yolov5s-ucas.pt --cfg polygon_yolov5s_ucas.yaml \
         --data polygon_ucas.yaml --hyp hyp.ucas.yaml --img-size 1024 \
         --epochs 3 --batch-size 12 --noautoanchor --polygon --cache
  4. Performance
    4.1. Confusion Matrix

    4.2. Precision Curve

    4.3. Recall Curve

    4.4. Precision-Recall Curve

    4.5. F1 Curve

Section V. Polygon-Tutorial 2: Transform COCO Dataset to Polygon Labels Using Segmentation

Transform COCO Dataset to Polygon Labels by Following [Polygon-Tutorial 2](https://github.com/XinzeLee/PolygonObjectDetection/blob/main/polygon-yolov5/Polygon-Tutorial2.ipynb)

Transformed Exemplar Figure
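Polygon labels, like standard YOLO labels, store coordinates normalized to [0, 1]. A minimal sketch of the normalized-to-pixel conversion in the spirit of xyxyxyxyn2xyxyxyxy from utils/general.py (the repo's version operates on arrays/tensors; this illustrative version takes a flat list):

```python
def xyxyxyxyn2xyxyxyxy(coords, w, h):
    # coords: [x1, y1, x2, y2, x3, y3, x4, y4] with each value in [0, 1];
    # even indices are x (scaled by image width w), odd are y (by height h)
    return [c * (w if i % 2 == 0 else h) for i, c in enumerate(coords)]
```

For a 640x480 image, the normalized corners [0, 0, 1, 0, 1, 1, 0, 1] map back to the pixel corners (0, 0), (640, 0), (640, 480), (0, 480).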

Section VI. Expansion to More Than Four Corners


Section VII. References

Comments
  • NMS time limit 10.0s exceeded


    Thanks for sharing great works!

I am trying to train on the COCO dataset. When calculating mAP on the val data, I got the warning below.

    WARNING: NMS time limit 10.0s exceeded

I think that when many boxes are found, NMS becomes too costly and the time limit is exceeded.

So I changed conf_thres to 0.1 (the default is 0.001), and it works with no warning.

But I am afraid this change will affect performance. What do you think?

    opened by tak-s 10
  • Strange behaviour in overlapping bounding boxes


    @XinzeLee I have an issue when there are two adjacent or overlapping objects that I want to detect. I created a diagram to give an example.

    PolygonProblem

    The objects I am trying to detect have a rectangular shape and are represented in the image by black rectangles with grey outlines. The bounding boxes predicted by the model are in red.

As you can see, boxes 1 and 3 are correct and box 2 is incorrect. Increasing the confidence threshold or decreasing the IoU threshold causes only box 2 to be visible.

    This happens in almost every case where 2 or more objects are close together.

    Any idea on the root of the problem?

    opened by AntMorais 2
  • TypeError: test() got an unexpected keyword argument 'polygon'


    https://github.com/XinzeLee/PolygonObjectDetection/blob/f3333f560a08b7fccba4285f0c99cd5af03dc45a/polygon-yolov5/polygon_train.py#L444

The parameter 'polygon' is not defined in test(): https://github.com/XinzeLee/PolygonObjectDetection/blob/f3333f560a08b7fccba4285f0c99cd5af03dc45a/polygon-yolov5/polygon_test.py#L25

    opened by tak-s 2
  • What is polygon yolov5 mAP on UCAS dataset


Firstly, thanks for your work. I have a question: did you test the polygon-yolov5 model on UCAS? I want to compare its results with those of other object detectors. Results for several SOTA satellite-image object detectors are collected in this repo: https://github.com/ming71/UCAS-AOD-benchmark

    opened by vpeopleonatank 2
  • about multi-scale argument during training


Does the argument help improve mAP? I'm asking because we generate anchors for a fixed image size. Since the multi-scale argument varies the image size by ±50%, will it have a negative effect on mAP?

    opened by nsabir2011 1
  • Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    When following the first tutorial on Google Colab, I am trying to run !python polygon_test.py --weights polygon-yolov5s-ucas.pt --data polygon_ucas.yaml --img 1024 --iou 0.65 --task val --device 0 as in the example. I get the following error:

    Traceback (most recent call last):
      File "polygon_test.py", line 325, in <module>
        test(**vars(opt))
      File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "polygon_test.py", line 224, in test
        for j in (ious > iouv[index_ap50]).nonzero(as_tuple=False):
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    I have not modified any code or data and cannot figure out where the issue is. Any help would be much appreciated. Thank you!

    opened by sac3tf 1
  • custom data polygon transform using Polygon-Tutorial2.ipynb


    hello @XinzeLee ,

I'm using my own custom dataset with a JSON annotation file in COCO format. I'm trying to use Polygon-Tutorial2.ipynb on it, but somehow it throws an error. Could you please help me run it and get rotated bounding boxes from the polygon annotations? Thank you in advance.

This is the error I'm getting:

    Traceback (most recent call last):
      File "C:/yolo/tranform.py", line 173, in <module>
        main()
      File "C:/yolo/tranform.py", line 170, in main
        seg2poly(r'C:\Users\exp', plot=True)
      File "C:/yolo/tranform.py", line 62, in seg2poly
        img_dir = img_dir / prefix
    UnboundLocalError: local variable 'prefix' referenced before assignment

    opened by apanand14 1
  • How to use this along with basic yolov5


My project reads license plates: I used your polygon model to detect the 4 corners of the plate and rectify it, then the basic yolov5 model to detect the characters. But I got this error when using both models in one runtime; there is no problem if I use them separately.

    opened by NMT201 0
  • Two errors

    1. 'Upsample' object has no attribute 'recompute_scale_factor' — fix by setting torch==1.10.0 in requirements.txt

    2. ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part. For CPU polygon_detect.py, in utils/general.py lines 813 and 815, edit boxes1[i, :].view(4,2) -> boxes1[i, :].view(4,2).numpy() and boxes2[j, :].view(4,2) -> boxes2[j, :].view(4,2).numpy()

    opened by tdat97 0
  • Invalid SOS parameters for sequential JPEG and zero box values during training


    • I'm using jpg images and get "Invalid SOS parameters for sequential JPEG"
    • I get zero for the box value, P, R, and mAP when training


    • How do I choose this value? If I comment it out, I get an error.
    opened by Aun0124 0
  • RuntimeError: result type Float can't be cast to the desired output type long int


    Facing an error while training. Training command:

    !python polygon_train.py --weights yolov5s.pt --cfg polygon_yolov5s_ucas.yaml \
        --data data/custom.yaml --hyp hyp.ucas.yaml --img-size 1024 \
        --epochs 3 --batch-size 12 --noautoanchor --polygon --cache


    opened by poojatambe 0
  • When I train my own dataset, P, R, and mAP are zero

I annotated some images and found that training starts successfully, but P, R, and mAP stay zero the whole time; I trained for 200 epochs.

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    64/199     3.17G   0.03754   0.01328         0   0.05082        11       640: 100%|█| 21/21 [00:02<00:00,  9.
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 2/2 [00:00<00:00,
                 all         19          0          0          0          0          0

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    65/199     3.17G   0.03064    0.0133         0   0.04394         9       640: 100%|█| 21/21 [00:02<00:00,  9.
               Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 2/2 [00:00<00:00,
                 all         19          0          0          0          0          0
    
     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    66/199     3.17G   0.03182   0.01224         0   0.04405        12       640:  62%|▌| 13/21 [00:01<00:00,  9.
    
    opened by futureflsl 1
Releases: v1.0
Owner: xinzelee