Training YOLOv5 on a Custom Dataset (Detailed Walkthrough) and Deploying with Flask

Overview

Train YOLOv5 on your own dataset (a detailed walkthrough) and deploy the trained model with Flask.

Dependencies

  • torch
  • torchvision
  • numpy
  • opencv-python
  • lxml
  • tqdm
  • flask
  • pillow
  • tensorboard
  • matplotlib
  • pycocotools

On Windows, use pycocotools-windows instead of pycocotools.

1. Prepare the dataset

This walkthrough uses the PASCAL VOC dataset as an example (extraction code: 07wp). Place the downloaded dataset under the datasets directory. The dataset structure is as follows:

---VOC2012
--------Annotations
---------------xml0
---------------xml1
--------JPEGImages
---------------img0
---------------img1
--------pascal_voc_classes.txt

Annotations holds all the XML annotation files, JPEGImages holds all the images, and pascal_voc_classes.txt is the class list file.

Generating the label files

YOLO label files have the following format:

102 0.682813 0.415278 0.237500 0.502778
102 0.914844 0.396528 0.168750 0.451389

The first value is the label, i.e. the class of the object in the image.
The remaining four values give the object's position as (x_center, y_center, w, h): the coordinates of the object's center and its width and height, all relative to the image size.
The example above contains two objects.

If you already have label files in this format, you can skip straight to the next step. If not, use labelimg (extraction code: dbi2) to annotate your images; it produces XML label files, which are then converted to YOLO-format label files. labelimg is very easy to use and is not covered here.

Convert the XML label files to YOLO format (a rough sketch of this conversion is shown after the directory listing below):

python center/xml_yolo.py

pascal_voc_classes.txt is the JSON file that lists your classes. For the VOC dataset it looks like this:

["aeroplane","bicycle", "bird","boat","bottle","bus","car","cat","chair","cow","diningtable","dog","horse","motorbike","person","pottedplant","sheep","sofa","train", "tvmonitor"]

After running the script above, the directory structure looks like this:

---VOC2012
--------Annotations
--------JPEGImages
--------pascal_voc_classes.json
---yolodata
--------images
--------labels
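
The script center/xml_yolo.py ships with the repo and is not reproduced here. For reference, a minimal sketch of a VOC-XML to YOLO conversion, with the paths and the class-list file name assumed from the structure above, could look like this:

# Minimal sketch of a VOC-XML -> YOLO-txt conversion (illustrative, not the repo's exact script).
import json
from pathlib import Path
from lxml import etree

VOC_ROOT = Path("datasets/VOC2012")           # assumed location, see the structure above
OUT_LABELS = Path("datasets/yolodata/labels")
OUT_LABELS.mkdir(parents=True, exist_ok=True)

# pascal_voc_classes.txt holds a JSON list of class names, as shown above
classes = json.loads((VOC_ROOT / "pascal_voc_classes.txt").read_text(encoding="utf-8"))

for xml_file in (VOC_ROOT / "Annotations").glob("*.xml"):
    root = etree.parse(str(xml_file)).getroot()
    img_w = float(root.findtext("size/width"))
    img_h = float(root.findtext("size/height"))
    lines = []
    for obj in root.iter("object"):
        cls_id = classes.index(obj.findtext("name"))
        box = obj.find("bndbox")
        xmin, ymin = float(box.findtext("xmin")), float(box.findtext("ymin"))
        xmax, ymax = float(box.findtext("xmax")), float(box.findtext("ymax"))
        # YOLO format: class x_center y_center width height, all relative to the image size
        xc, yc = (xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h
        bw, bh = (xmax - xmin) / img_w, (ymax - ymin) / img_h
        lines.append(f"{cls_id} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}")
    (OUT_LABELS / (xml_file.stem + ".txt")).write_text("\n".join(lines), encoding="utf-8")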

2. Split into training and validation sets

The split is simple: shuffle the original data and divide it 9:1 into a training set and a validation set (a sketch of the split is shown after the directory listing below). Run:

python center/get_train_val.py

Running the script above produces the following directory structure:

---VOC2012
--------Annotations
--------JPEGImages
--------pascal_voc_classes.json
---yolodata
--------images
--------labels
---traindata
--------images
----------------train
----------------val
--------labels
----------------train
----------------val

traindata is the final training data used in the next step.
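
The script center/get_train_val.py is likewise not reproduced here. A minimal sketch of a random 9:1 split, with the paths assumed from the structure above, could look like this:

# Minimal sketch of a random 9:1 train/val split (illustrative, not the repo's exact script).
import random
import shutil
from pathlib import Path

IMAGES = Path("datasets/yolodata/images")   # assumed paths, see the structure above
LABELS = Path("datasets/yolodata/labels")
OUT = Path("datasets/traindata")

images = sorted(IMAGES.glob("*.jpg"))
random.seed(0)
random.shuffle(images)

n_train = int(len(images) * 0.9)            # 9:1 split
splits = {"train": images[:n_train], "val": images[n_train:]}

for split, files in splits.items():
    (OUT / "images" / split).mkdir(parents=True, exist_ok=True)
    (OUT / "labels" / split).mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, OUT / "images" / split / img.name)
        label = LABELS / (img.stem + ".txt")
        if label.exists():
            shutil.copy(label, OUT / "labels" / split / label.name)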

3. Train the model

Training YOLOv5 is straightforward. The code in this repo has been simplified; its structure is as follows:

dataset             # datasets
------traindata     # training data
inference           # input/output interface
------inputs        # input data
------outputs       # output data
config              # configuration files
------score.yaml    # training configuration
------yolov5l.yaml  # model configuration
models              # model code
runs                # training logs
utils               # utility code
weights             # saved models: last.pt, best.pt
train.py            # training script
detect.py           # detection/testing script

score.yaml is explained below:

# train and val datasets (image directory)
train: ./datasets/traindata/images/train/
val: ./datasets/traindata/images/val/
# number of classes
nc: 2
# class names
names: ['apple', 'banana']
  • train: path to the training images
  • val: path to the validation images
  • nc: number of classes
  • names: the corresponding class names
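
As a quick sanity check before training, you can load the file and confirm that nc matches the number of names. A small sketch, assuming the config/score.yaml path shown in the structure above and that PyYAML is installed (YOLOv5 itself depends on it):

# Small sanity check for score.yaml (assumes the config path shown above).
import yaml

with open("config/score.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

assert cfg["nc"] == len(cfg["names"]), "nc must equal the number of class names"
print(cfg["train"], cfg["val"], cfg["nc"], cfg["names"])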
yolov5l.yaml is explained below:

nc: 2 # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 1-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 2-P2/4
   [-1, 3, Bottleneck, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 4-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 6-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]], # 8-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 6, BottleneckCSP, [1024]],  # 10
  ]
head:
  [[-1, 3, BottleneckCSP, [1024, False]],  # 11
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 12 (P5/32-large)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, Conv, [512, 1, 1]],
   [-1, 3, BottleneckCSP, [512, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 17 (P4/16-medium)
   [-2, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 3, BottleneckCSP, [256, False]],
   [-1, 1, nn.Conv2d, [na * (nc + 5), 1, 1, 0]],  # 22 (P3/8-small)
   [[], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
  • nc: number of object classes.
  • depth_multiple and width_multiple: control the model depth and width; different values correspond to the s, m, l, and x models.
  • anchors: base boxes obtained by k-means clustering of the labeled boxes; the network predicts object boxes relative to these anchors.
  • YOLOv5 generates anchors automatically: it runs k-means with Euclidean distance and then refines the result with a genetic algorithm. In my experience, k-means with Euclidean distance works slightly worse than k-means with a 1 - IoU distance (see the sketch after this list), although the difference is small. Contact me if you want the 1 - IoU clustering code.
  • backbone: the feature-extraction part of the network.
  • head: the prediction part of the network.
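
As an illustration of the 1 - IoU idea mentioned above (this is not the author's private code), a minimal k-means over label box widths and heights using 1 - IoU as the distance could look like this:

# Minimal sketch of anchor clustering with a 1 - IoU distance (illustrative only).
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between boxes and anchors when both are centered at the origin.
    # boxes: (N, 2) array of widths/heights, anchors: (K, 2) array of widths/heights.
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dist = 1.0 - iou_wh(boxes, anchors)            # distance = 1 - IoU
        assign = dist.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(boxes[assign == j], axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]   # sort by area, small to large

# boxes is an (N, 2) array of (width, height) pairs collected from your labels, e.g.:
# print(kmeans_anchors(boxes, k=9))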

Configuring train.py is very simple; we only need to modify the following parameters:

epoch:         number of training epochs
batch_size:    number of images per iteration
cfg:           path to the model configuration file
data:          path to the training configuration file
weights:       checkpoint to load for resuming training

Run in a terminal (yolov5l by default):

 python train.py

to start training.
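
If your copy of train.py keeps the upstream YOLOv5 argument names (an assumption; check python train.py --help or the argument parser in the script), an explicit invocation might look like:

 python train.py --cfg config/yolov5l.yaml --data config/score.yaml --epochs 100 --batch-size 16 --weights weights/last.pt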

Training process

Training results

4. Test the model

Three parameters need to be modified:
source:        path to the images/videos to detect
out:           path where the results are saved
weights:       path to the trained model weights
You can also test with weights pretrained on the COCO dataset; place them in the weights folder.

Extraction code: hhbb

Run in a terminal:

 python detect.py

to start detection.
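
As with training, if detect.py exposes the parameters listed above as command-line flags (an assumption; check the script's argument parser), a run might look like:

 python detect.py --source inference/inputs --out inference/outputs --weights weights/best.pt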

Detection results

5. Deploy with Flask

Deploying with Flask is very simple. If anything is unclear, you can refer to my earlier posts:

Deploying a Python/Flask project on Alibaba Cloud ECS, simple and clear, without nginx or uwsgi

A web platform for object detection and multi-object tracking based on yolov3-deepsort-flask

Run in a terminal:

 python app.py

and it will open the web page, where you can upload images for detection.
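
app.py ships with the repo and is not reproduced here. A minimal sketch of a Flask wrapper around detection is shown below; the routes, directory names, and the run_detection helper are assumptions for illustration only:

# Minimal sketch of a Flask upload-and-detect app (illustrative, not the repo's app.py).
import os
from flask import Flask, request, render_template_string, send_from_directory

UPLOAD_DIR = "inference/inputs"     # assumed, matching the repo structure above
OUTPUT_DIR = "inference/outputs"
os.makedirs(UPLOAD_DIR, exist_ok=True)
os.makedirs(OUTPUT_DIR, exist_ok=True)

app = Flask(__name__)

PAGE = """
<form method="post" enctype="multipart/form-data">
  <input type="file" name="image"> <input type="submit" value="Detect">
</form>
{% if result %}<img src="/outputs/{{ result }}">{% endif %}
"""

def run_detection(image_path, output_dir):
    # Hypothetical helper: call your detect.py logic here (load weights, run
    # inference, draw boxes) and save the annotated image into output_dir.
    raise NotImplementedError

@app.route("/", methods=["GET", "POST"])
def index():
    result = None
    if request.method == "POST":
        f = request.files["image"]
        path = os.path.join(UPLOAD_DIR, f.filename)
        f.save(path)
        run_detection(path, OUTPUT_DIR)   # hypothetical helper, see above
        result = f.filename
    return render_template_string(PAGE, result=result)

@app.route("/outputs/<name>")
def outputs(name):
    return send_from_directory(OUTPUT_DIR, name)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)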
