Training YOLOv5 on Your Own Dataset (Detailed Walkthrough) and Deploying It with Flask

Overview

Train YOLOv5 on your own dataset (detailed walkthrough) and deploy the trained model with Flask.

Dependencies

  • torch
  • torchvision
  • numpy
  • opencv-python
  • lxml
  • tqdm
  • flask
  • pillow
  • tensorboard
  • matplotlib
  • pycocotools

On Windows, use pycocotools-windows instead of pycocotools.
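
The dependencies can be installed with pip, for example (adjust the torch/torchvision install to your CUDA setup as needed):

pip install torch torchvision numpy opencv-python lxml tqdm flask pillow tensorboard matplotlib pycocotools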

1. Prepare the dataset

This guide uses the PASCAL VOC dataset as an example (extraction code: 07wp). Place the downloaded dataset under the datasets directory. The dataset structure is as follows:

---VOC2012
--------Annotations
---------------xml0
---------------xml1
--------JPEGImages
---------------img0
---------------img1
--------pascal_voc_classes.txt

Annotations contains all the XML annotation files, JPEGImages contains all the images, and pascal_voc_classes.txt is the class list file.

Generating the label files

YOLO label files have the following format:

102 0.682813 0.415278 0.237500 0.502778
102 0.914844 0.396528 0.168750 0.451389

  • The first value is the class label of the object in the image.
  • The next four values give the object's position as (x_center, y_center, w, h): the normalized coordinates of the box center and its normalized width and height.
  • The example above contains two objects.

If you already have label files in this format, you can skip straight to the next step. If not, you can annotate your images with labelImg (extraction code: dbi2), which produces XML label files that are then converted to the YOLO format. labelImg is very easy to use and is not covered further here.

Convert the XML label files to YOLO format:

python center/xml_yolo.py

pascal_voc_classes.txt is the JSON-formatted class file for your dataset (after conversion it appears as pascal_voc_classes.json, as in the tree below). The VOC class list looks like this:

["aeroplane","bicycle", "bird","boat","bottle","bus","car","cat","chair","cow","diningtable","dog","horse","motorbike","person","pottedplant","sheep","sofa","train", "tvmonitor"]

Directory structure after running the code above:

---VOC2012
--------Annotations
--------JPEGImages
--------pascal_voc_classes.json
---yolodata
--------images
--------labels

2. Split into training and validation sets

Splitting the data is simple: shuffle the original data, then split it 9:1 into a training set and a validation set (a minimal sketch of this split is shown after the directory tree below). The code is run with:

python center/get_train_val.py
Running the code above produces the following directory structure:
---VOC2012
--------Annotations
--------JPEGImages
--------pascal_voc_classes.json
---yolodata
--------images
--------labels
---traindata
--------images
----------------train
----------------val
--------labels
----------------train
----------------val
traindata is the final training data directory you need.
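
A minimal sketch of such a 9:1 split, mirroring what center/get_train_val.py is described to do (the paths and the .jpg extension are assumptions):

import random
import shutil
from pathlib import Path

random.seed(0)  # make the shuffle reproducible
img_dir = Path('datasets/yolodata/images')   # all images
lbl_dir = Path('datasets/yolodata/labels')   # all YOLO label files
out_dir = Path('datasets/traindata')

images = sorted(img_dir.glob('*.jpg'))
random.shuffle(images)
n_train = int(len(images) * 0.9)
splits = {'train': images[:n_train], 'val': images[n_train:]}

for split, files in splits.items():
    (out_dir / 'images' / split).mkdir(parents=True, exist_ok=True)
    (out_dir / 'labels' / split).mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, out_dir / 'images' / split / img.name)
        label = lbl_dir / (img.stem + '.txt')
        if label.exists():
            shutil.copy(label, out_dir / 'labels' / split / label.name)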

3. Train the model

Training YOLOv5 is straightforward. The code in this repository has been simplified; its structure is as follows:

dataset             # dataset directory
------traindata     # training data
inference           # inference I/O
------inputs        # input data
------outputs       # output results
config              # configuration files
------score.yaml    # training configuration
------yolov5l.yaml  # model configuration
models              # model code
runs                # training logs
utils               # utility code
weights             # saved weights: last.pt, best.pt
train.py            # training script
detect.py           # detection/testing script

score.yaml is explained below:

# train and val datasets (image directory)
train: ./datasets/traindata/images/train/
val: ./datasets/traindata/images/val/
# number of classes
nc: 2
# class names
names: ['apple', 'banana']
  • train: path to the training images
  • val: path to the validation images
  • nc: number of classes
  • names: the class names
yolov5l.yaml is explained below:
nc: 2 # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 1-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 2-P2/4
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 4-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 6-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 8-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],  # 10
  ]
head:
  [[-1, 3, BottleneckCSP, [1024, False]],  # 11
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 12 (P5/32-large)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 17 (P4/16-medium)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 22 (P3/8-small)
   [[], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
  • nc: the number of object classes.
  • depth_multiple and width_multiple: control the model's depth and width; different values correspond to the s, m, l, and x variants.
  • anchors: base boxes obtained by k-means clustering of the ground-truth boxes; the network predicts target boxes relative to these anchors.
  • YOLOv5 generates anchors automatically: it runs k-means with Euclidean distance and then refines the result with a genetic algorithm. In my experience, k-means with Euclidean distance performs slightly worse than k-means using 1 - IoU as the distance (contact me for that source code; a rough sketch also follows this list), although in practice the difference is small.
  • backbone: the feature-extraction part of the network.
  • head: the prediction part of the network.
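
For illustration, here is a minimal sketch of k-means anchor clustering using 1 - IoU as the distance, operating on the (width, height) pairs of the labelled boxes. This is an assumption about the approach, not the exact code mentioned above.

import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (N, 2) box sizes and (K, 2) anchor sizes,
    # treating all boxes as if they share the same top-left corner.
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, None, 0] * boxes[:, None, 1] \
          + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_iou(boxes, k=9, iters=100, seed=0):
    # boxes: float array of shape (N, 2) holding (width, height) pairs.
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].copy()
    last = None
    for _ in range(iters):
        assign = (1.0 - iou_wh(boxes, anchors)).argmin(axis=1)  # nearest anchor by 1 - IoU
        if last is not None and (assign == last).all():
            break  # assignments stopped changing
        for j in range(k):
            if (assign == j).any():
                anchors[j] = np.median(boxes[assign == j], axis=0)
        last = assign
    return anchors[np.argsort(anchors.prod(axis=1))]  # sorted by area, small to large

# Example usage: anchors = kmeans_iou(np.array(wh_pairs, dtype=float), k=9)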

Configuring train.py is very simple.

We only need to modify the following parameters:

epoch:         number of training epochs
batch_size     number of images fed per iteration
cfg:           path to the model configuration file
data:          path to the training configuration file
weights:       checkpoint to load for resuming training

Run in a terminal (yolov5l by default):

 python train.py

and training will start.
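
If the simplified train.py keeps the upstream YOLOv5-style command-line flags (an assumption; otherwise edit the default values inside train.py directly), a fully explicit invocation would look something like:

 python train.py --epochs 100 --batch-size 16 --cfg config/yolov5l.yaml --data config/score.yaml --weights weights/last.pt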

Training process

Training results

4. Test the model

Three parameters need to be modified:
source:        path to the images/videos to detect
out:           path where results will be saved
weights:       path to the trained weights file
You can also test with weights pretrained on the COCO dataset; download them and place them in the weights folder.

Extraction code: hhbb

Run in a terminal:

 python detect.py

and detection will start.
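
Likewise, if detect.py exposes the three parameters above as command-line flags (an assumption, including the exact flag names; otherwise set them in detect.py), a run could look like:

 python detect.py --source inference/inputs --output inference/outputs --weights weights/best.pt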

Test results

5. Deploy with Flask

Deploying with Flask is very simple. If anything is unclear, you can refer to my earlier posts:

Deploying a Python/Flask project on Alibaba Cloud ECS, simple and easy to follow, without nginx or uWSGI

An object detection and multi-object tracking web platform based on yolov3-deepsort-flask

Run in a terminal:

 python app.py

and it will open the web page, where you can upload an image for detection.
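
If you want to build the service yourself, a minimal Flask sketch is shown below. It assumes the trained weights sit at weights/best.pt and uses the ultralytics/yolov5 torch.hub loader for inference; the repository's own app.py may be wired differently (for example, rendering an HTML upload page instead of a raw API endpoint).

import io

import torch
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)
# Load the custom-trained model once at startup (assumes the torch.hub loader is available).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='weights/best.pt')

@app.route('/predict', methods=['POST'])
def predict():
    # Expect an image uploaded under the form field name "image".
    img = Image.open(request.files['image'].stream).convert('RGB')
    results = model(img)                               # run detection
    annotated = Image.fromarray(results.render()[0])   # image with boxes drawn
    buf = io.BytesIO()
    annotated.save(buf, format='JPEG')
    buf.seek(0)
    return send_file(buf, mimetype='image/jpeg')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)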
