
YOLTv4


YOLTv4 builds upon YOLT and SIMRDWN, and updates these frameworks to use the most performant version of YOLO, YOLOv4. YOLTv4 is designed to detect objects in aerial or satellite imagery in arbitrarily large images that far exceed the ~600×600 pixel size typically ingested by deep learning object detection frameworks.

This repository is built upon the impressive work of AlexeyAB's YOLOv4 implementation, which improves both speed and detection performance compared to YOLOv3 (which is implemented in SIMRDWN). We use YOLOv4 instead of "YOLOv5", since YOLOv4 is endorsed by the original creators of YOLO, whereas "YOLOv5" is not; furthermore, YOLOv4 appears to have superior performance.

Below, we provide examples of how to use this repository with the open-source Rareplanes dataset.


Running YOLTv4


0. Installation

YOLTv4 is built to execute within a Docker container on a GPU-enabled machine. Building the included Docker file creates an Ubuntu 16.04 image with CUDA 9.2, Python 3.6, and conda.

  1. Clone this repository (e.g. to /yoltv4/).

  2. Download model weights to yoltv4/darknet/weights (a scripted download is sketched after this list). See:
     https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
     https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
     https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
     https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.conv.142

  3. Install nvidia-docker.

  4. Build the Docker image.

     nvidia-docker build -t yoltv4_image /yoltv4/docker
    
  5. Spin up the docker container (see the docker docs for options).

     NV_GPU=0 nvidia-docker run -it -v /local_data:/local_data -v /yoltv4:/yoltv4 --ipc=host --name yoltv4_gpu0 yoltv4_image
    
  6. Compile the Darknet C program.

    First set GPU=1, CUDNN=1, CUDNN_HALF=1, and OPENCV=1 in /yoltv4/darknet/Makefile, then make:

     cd /yoltv4/darknet
     make
    
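For convenience, steps 2 and 6 can be scripted. The sketch below is not part of the repository; it assumes the clone location /yoltv4/ used above and the stock AlexeyAB Makefile (which ships with these flags set to 0). The downloads run on the host before building the container, while the remaining lines run inside the container.

     # Step 2 (on the host): fetch pretrained weights into yoltv4/darknet/weights.
     mkdir -p /yoltv4/darknet/weights && cd /yoltv4/darknet/weights
     wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
     wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
     wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
     wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-csp.conv.142

     # Step 6 (inside the container): flip the Makefile flags, then build.
     cd /yoltv4/darknet
     sed -i 's/^GPU=0/GPU=1/; s/^CUDNN=0/CUDNN=1/; s/^CUDNN_HALF=0/CUDNN_HALF=1/; s/^OPENCV=0/OPENCV=1/' Makefile
     make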

1. Train

A. Prepare Data

  1. Make YOLO images and labels (see yoltv4/notebooks/train_test_pipeline.ipynb for further details).

  2. Create a txt file listing the training images (a sketch of steps 2 and 3 follows this list).

  3. Create an obj.names file with each desired object name on its own line.

  4. Create an obj.data file in the directory yoltv4/darknet/data pointing to the necessary files. For example:

    /yoltv4/darknet/data/rareplanes_train.data

     classes = 30
     train =  /local_data/cosmiq/wdata/rareplanes/train/txt/train.txt
     valid =  /local_data/cosmiq/wdata/rareplanes/train/txt/valid.txt
     names =  /yoltv4/darknet/data/rareplanes.name
     backup = backup/
    
  5. Prepare config files.

    See the Darknet documentation for config instructions, or tweak /yoltv4/darknet/cfg/yoltv4_rareplanes.cfg.
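
A minimal sketch of steps 2 and 3 for the Rareplanes example is below; it is not part of the repository. The sliced-image directory and the .png extension are assumptions (use whatever the notebook writes out), and the class names shown are placeholders:

     # Step 2: list every sliced training image, one absolute path per line.
     # Darknet expects a matching .txt label file beside each image, with one line per
     # object: <class_id> <x_center> <y_center> <width> <height>, normalized to [0, 1].
     mkdir -p /local_data/cosmiq/wdata/rareplanes/train/txt
     find /local_data/cosmiq/wdata/rareplanes/train/yoltv4/images_slice/ -name '*.png' \
         > /local_data/cosmiq/wdata/rareplanes/train/txt/train.txt

     # Step 3: obj.names-style file, one class name per line (placeholders shown;
     # the Rareplanes setup above uses 30 classes).
     printf '%s\n' placeholder_class_00 placeholder_class_01 > /yoltv4/darknet/data/rareplanes.name

If you adapt the .cfg to a different number of classes, the usual Darknet rule of thumb is to set classes= in each [yolo] layer and filters=(classes+5)*3 in the [convolutional] layer immediately preceding it (e.g. filters=105 for 30 classes).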

B. Execute Training

  1. Execute.

     cd /yoltv4/darknet
     time ./darknet detector train data/rareplanes_train.data  cfg/yoltv4_rareplanes.cfg weights/yolov4.conv.137  -dont_show -mjpeg_port 8090 -map
    
  2. Review progress (plotted at: /yoltv4/darknet/chart_yoltv4_rareplanes.png).
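
Checkpoints are written to the backup/ directory named in the .data file. If training is interrupted, it can usually be resumed from the most recent checkpoint; the filename below assumes Darknet's default <cfg-name>_last.weights naming:

     cd /yoltv4/darknet
     ./darknet detector train data/rareplanes_train.data cfg/yoltv4_rareplanes.cfg \
         backup/yoltv4_rareplanes_last.weights -dont_show -map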


2. Test

A. Prepare Data

  1. Make sliced images (see yoltv4/notebooks/train_test_pipeline.ipynb for further details).

  2. Create a txt file listing the sliced test images (see the sketch after this list).

  3. Create an obj.data file in the directory yoltv4/darknet/data pointing to the necessary files. For example:

    /yoltv4/darknet/data/rareplanes_test.data

     classes = 30
     train =
     valid =  /local_data/cosmiq/wdata/rareplanes/test/txt/test.txt
     names =  /yoltv4/darknet/data/rareplanes.name
     backup = backup/
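
As with training, the test-image list (step 2) can be generated with a one-liner. The sliced-image directory matches the one passed to post_process.py below; the .png extension is an assumption:

     mkdir -p /local_data/cosmiq/wdata/rareplanes/test/txt
     find /local_data/cosmiq/wdata/rareplanes/test/yoltv4/images_slice/ -name '*.png' \
         > /local_data/cosmiq/wdata/rareplanes/test/txt/test.txt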

B. Execute Testing

  1. Execute (proceeds at >80 frames per second on a Tesla P100):

     cd /yoltv4/darknet
     time ./darknet detector valid data/rareplanes_test.data cfg/yoltv4_rareplanes.cfg backup/yoltv4_rareplanes_best.weights
    
  2. Post-process detections:

    A. Move detections into results directory

     mkdir /yoltv4/darknet/results/rareplanes_preds_v0
     mkdir  /yoltv4/darknet/results/rareplanes_preds_v0/orig_txt
     mv /yoltv4/darknet/results/comp4_det_test_*  /yoltv4/darknet/results/rareplanes_preds_v0/orig_txt/
    

    B. Stitch detections back together and make plots

     time python /yoltv4/yoltv4/post_process.py \
         --pred_dir=/yoltv4/darknet/results/rareplanes_preds_v0/orig_txt/ \
         --raw_im_dir=/local_data/cosmiq/wdata/rareplanes/test/images/ \
         --sliced_im_dir=/local_data/cosmiq/wdata/rareplanes/test/yoltv4/images_slice/ \
         --out_dir=/yoltv4/darknet/results/rareplanes_preds_v0 \
         --detection_thresh=0.25 \
         --slice_size=416 \
         --n_plots=8
    

Outputs will look something like the figures below:

(example prediction plots)

Owner
Adam Van Etten