Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Skin Lesion detection using YOLO

This project deals with the detection of skin lesions within the ISICs dataset using YOLOv3 Object Detection with Darknet.

Predictions

YOLOv3

YOLO stands for "You Only Look Once"; it uses Convolutional Neural Networks for object detection. YOLO can detect multiple objects in a single image, which means that in addition to predicting object classes, it also localises them within the image. A single neural network processes the entire image: it divides the image into regions and generates probabilities for each region. YOLO predicts multiple bounding boxes covering parts of the image and then, based on these probabilities, picks the best ones.

Architecture of YOLOv3:

Architecture of YOLOv3

  • YOLOv3 has a total of 106 layers; detections are made at layers 82, 94 and 106.
  • It consists of residual blocks, skip connections and up-sampling.
  • Each convolutional layer is followed by a batch normalization layer and a Leaky ReLU activation.
  • There are no pooling layers; instead, convolutional layers with stride 2 are used to down-sample the feature maps.

Input:

Input images can be of any size; there is no need to resize them before feeding them to the network. However, all the images must be stored in a single folder. The same folder must also contain one text file per image (with the same file name) holding the "true" bounding-box annotations in YOLOv3 format, i.e.,

    <class id> <Xo/X> <Yo/Y> <W/X> <H/Y>

where,

  • class id = label index of the class to be annotated
  • Xo = X coordinate of the bounding box’s centre
  • Yo = Y coordinate of the bounding box’s centre
  • W = Width of the bounding box
  • H = Height of the bounding box
  • X = Width of the image
  • Y = Height of the image

For multiple objects in the same image, this annotation is saved line-by-line for each object.
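
As a worked example of this normalisation (the function name and all numbers below are hypothetical and not part of the repository's scripts):

    # Minimal sketch of the YOLO normalisation described above (hypothetical values).
    def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
        """Convert a pixel-space bounding box to a YOLOv3 annotation line."""
        xo = (x_min + x_max) / 2.0   # Xo: box centre, x (pixels)
        yo = (y_min + y_max) / 2.0   # Yo: box centre, y (pixels)
        w = x_max - x_min            # W: box width (pixels)
        h = y_max - y_min            # H: box height (pixels)
        # Normalise everything by the image width (X) and height (Y)
        return f"{class_id} {xo / img_w:.6f} {yo / img_h:.6f} {w / img_w:.6f} {h / img_h:.6f}"

    print(to_yolo_line(0, 120, 80, 360, 300, 640, 480))
    # -> 0 0.375000 0.395833 0.375000 0.458333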

Steps:

Note: I have used Google Colab, which supports Linux commands. The steps for running this on a local Windows computer are different.

  1. Create "true" annotations using Annotate_YOLO.py, which takes segmented (binary) images as input and returns a text file for every image, labelled in the YOLO format (an illustrative sketch follows this step).
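
    Annotate_YOLO.py is the script that actually does this; purely as an illustration of the idea (not the repository's code), a bounding box can be derived from a binary segmentation mask roughly like so — the folder name and the class id 0 are assumptions:

    import glob
    import os
    import cv2
    import numpy as np

    # Illustrative sketch only: derive a YOLO annotation from a binary mask.
    for mask_path in glob.glob("ISIC_masks/*.png"):
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
        ys, xs = np.where(mask > 0)                 # foreground (lesion) pixels
        if xs.size == 0:
            continue                                # skip empty masks
        img_h, img_w = mask.shape[:2]
        x_min, x_max, y_min, y_max = xs.min(), xs.max(), ys.min(), ys.max()
        xo = (x_min + x_max) / 2 / img_w            # normalised centre x
        yo = (y_min + y_max) / 2 / img_h            # normalised centre y
        w = (x_max - x_min) / img_w                 # normalised width
        h = (y_max - y_min) / img_h                 # normalised height
        with open(os.path.splitext(mask_path)[0] + ".txt", "w") as f:
            f.write(f"0 {xo:.6f} {yo:.6f} {w:.6f} {h:.6f}\n")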

  2. Clone darknet from AlexeyAB's GitHub repository, adjust the Makefile to enable OPENCV and GPU and then build darknet.

    # Clone darknet repo
    !git clone https://github.com/AlexeyAB/darknet
    
    # Change Makefile to have GPU and OPENCV enabled
    %cd darknet
    !sed -i 's/OPENCV=0/OPENCV=1/' Makefile
    !sed -i 's/GPU=0/GPU=1/' Makefile
    !sed -i 's/CUDNN=0/CUDNN=1/' Makefile
    !sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
    
    # Build darknet (the executable only exists after this step)
    !make
    
    # Make the built darknet binary executable
    !chmod +x ./darknet
  3. Download the pre-trained darknet53.conv.74 weights from darknet. These are the convolutional weights of the Darknet-53 backbone and serve as the starting point for transfer learning on our custom dataset.

    !wget https://pjreddie.com/media/files/darknet53.conv.74
    
  4. Define the helper function, as in Helper.py, that is used to display images (a sketch of such a helper follows this step).
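
    Helper.py contains the actual implementation; the sketch below shows what such an imShow helper typically looks like in Colab and is an assumption, not a copy of the repository's code:

    import cv2
    import matplotlib.pyplot as plt

    def imShow(path):
        """Display an image saved on disk (e.g. darknet's predictions.jpg) inside the notebook."""
        image = cv2.imread(path)                         # OpenCV loads images as BGR
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # convert to RGB for matplotlib
        plt.figure(figsize=(12, 8))
        plt.axis("off")
        plt.imshow(image)
        plt.show()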

  5. Split the dataset into training, validation and test sets (including the labels) and store them in the darknet/data folder, in folders named "obj", "valid" and "test" respectively (a minimal split sketch follows this step). In my case, total images = 2594, out of which:

    Train = 2094 images; Validation = 488 images; Test = 12 images.
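
    A minimal sketch of such a split (the source folder name, the fixed seed and the copy-based approach are assumptions for illustration):

    import glob
    import os
    import random
    import shutil

    random.seed(0)
    images = sorted(glob.glob("ISIC_images/*.jpg"))      # assumed source folder
    random.shuffle(images)

    splits = {"obj": images[:2094], "valid": images[2094:2582], "test": images[2582:]}
    for folder, files in splits.items():
        os.makedirs(f"darknet/data/{folder}", exist_ok=True)
        for img in files:
            label = os.path.splitext(img)[0] + ".txt"    # YOLO label next to the image
            shutil.copy(img, f"darknet/data/{folder}/")
            shutil.copy(label, f"darknet/data/{folder}/")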

  6. Create obj.names, containing the class names (one class name per line), and obj.data, which points to the file paths of the training data, validation data and the backup folder that will store the weights of the model trained on our custom dataset (see the sketch after this step).
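
    Roughly, for a single "lesion" class the two files could be written like this (the class name and the exact list-file paths are assumptions; the backup path matches the one used in the later training commands):

    import os

    # Sketch only: write obj.names and obj.data for a single-class detector.
    os.makedirs("darknet/data", exist_ok=True)

    with open("darknet/data/obj.names", "w") as f:
        f.write("lesion\n")                              # one class name per line

    with open("darknet/data/obj.data", "w") as f:
        f.write(
            "classes = 1\n"
            "train = data/train.txt\n"
            "valid = data/test.txt\n"
            "names = data/obj.names\n"
            "backup = /content/drive/MyDrive/darknet/backup\n"
        )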

  7. Tune hyper-parameters by creating a custom config file in the "cfg" folder inside the "darknet" folder. Change the following parameters in yolov3.cfg and save it as yolov3-custom.cfg (a quick sanity check of the derived values follows this step):

    The parameters are chosen by considering the following:

    - max_batches = (# of classes) * 2000 --> [but no less than 4000]
    
    - steps = (80% of max_batches), (90% of max_batches)
    
    - filters = (# of classes + 5) * 3
    
    - change random = 1 to random = 0 --> [to speed up training but slightly reduce accuracy]
    

    I chose the following:

    • Training batch = 64
    • Training subdivisions = 16
    • max_batches = 4000, steps = 3200, 3600
    • classes = 1 in the three YOLO layers
    • filters = 18 in the three convolutional layers just before the YOLO layers.
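
    As a quick sanity check of those rules for classes = 1 (plain arithmetic, not part of the repository's code):

    # Derive the custom config values from the rules above for a single class.
    classes = 1
    max_batches = max(classes * 2000, 4000)                    # -> 4000
    steps = (int(0.8 * max_batches), int(0.9 * max_batches))   # -> (3200, 3600)
    filters = (classes + 5) * 3                                # -> 18
    print(max_batches, steps, filters)                         # 4000 (3200, 3600) 18
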
  8. Create "train.txt" and "test.txt" using Generate_Train_Test.py (an illustrative sketch follows).
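
    Generate_Train_Test.py is the script that does this; as an illustration only (not the repository's code), the two files simply list the image paths of each split, one per line, for darknet to read:

    import glob

    # Sketch only: list the images of each split in a text file.
    with open("darknet/data/train.txt", "w") as f:
        f.write("\n".join(sorted(glob.glob("darknet/data/obj/*.jpg"))) + "\n")

    with open("darknet/data/test.txt", "w") as f:
        f.write("\n".join(sorted(glob.glob("darknet/data/valid/*.jpg"))) + "\n")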

  9. Train the Custom Object Detector

    !./darknet detector train data/obj.data cfg/yolov3-custom.cfg darknet53.conv.74 -dont_show -map
    
    # Show the graph to review the performance of the custom object detector
    imShow('chart.png')

    The new weights will be stored in the backup folder as yolov3-custom_best.weights after training.

  10. Check the Mean Average Precision (mAP) of the model

    !./darknet detector map data/obj.data cfg/yolov3-custom.cfg /content/drive/MyDrive/darknet/backup/yolov3-custom_best.weights
  11. Run Your Custom Object Detector

    For testing, set batch and subdivisions to 1.

    # To set our custom cfg to test mode
    %cd cfg
    !sed -i 's/batch=64/batch=1/' yolov3-custom.cfg
    !sed -i 's/subdivisions=16/subdivisions=1/' yolov3-custom.cfg
    %cd ..
    
    # To run custom detector
    # thresh flag sets threshold probability for detection
    !./darknet detector test data/obj.data cfg/yolov3-custom.cfg /content/drive/MyDrive/darknet/backup/yolov3-custom_best.weights /content/drive/MyDrive/Test_Lesion/ISIC_0000000.jpg -thresh 0.3
    
    # Show the predicted image with bounding box and its probability
    imShow('predictions.jpg')

Conclusion

The whole process looks like this:

Summary

Chart (loss and mAP vs. iteration number):

Loss Chart

Loss Capture

The model was set to run for 4,000 iterations; however, the rate of decrease in loss is not very significant after about 1,000 iterations. The model performs with similar precision even with roughly 3,000 fewer iterations, which greatly reduces training time (saving over 9 hours of compute) and also lowers the chance of overfitting the data. Hence, training was stopped prematurely just after 1,100 iterations.

From the above chart, we can see that the average loss is 0.2545 and the mean average precision (mAP) is over 95%, which is an excellent result.

Assumption and dependencies

The user is assumed to have access to the ISICs dataset of coloured images required for training, as well as the corresponding binary segmentation images.

Dependencies:

  • Google Colab Notebook
  • Python 3.7 on local machine
  • Python libraries: matplotlib, OpenCV (cv2), glob
  • Darknet which is an open source neural network framework written in C and CUDA

References

Redmon, J., & Farhadi, A. (2018). YOLO: Real-Time Object Detection. Retrieved October 28, 2021, from https://pjreddie.com/darknet/yolo/

About the Author

Lalith Veerabhadrappa Badiger
The University of Queensland, Brisbane, Australia
Master of Data Science
Student ID: 46557829
Email ID: [email protected]
