A test-time augmentation (TTA) toolkit based on Paddle 2.0.

Overview

PaTTA

Image Test-Time Augmentation with Paddle 2.0!

           Input
             |           # input batch of images 
        / / /|\ \ \      # apply augmentations (flips, rotation, scale, etc.)
       | | | | | | |     # pass augmented batches through model
       | | | | | | |     # reverse transformations for each batch of masks/labels
        \ \ \ / / /      # merge predictions (mean, max, gmean, etc.)
             |           # output batch of masks/labels
           Output

Table of Contents

  1. Quick Start
  2. Transforms
  3. Aliases
  4. Merge modes
  5. Installation

Quick start (Default Transforms)

Test

After defining your network, you can run test-time augmentation with the wrappers below.

Segmentation model wrapping [docstring]:
import patta as tta
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')
Classification model wrapping [docstring]:
tta_model = tta.ClassificationTTAWrapper(model, tta.aliases.five_crop_transform())
Keypoints model wrapping [docstring]:
tta_model = tta.KeypointsTTAWrapper(model, tta.aliases.flip_transform(), scaled=True)

Note: the model must return keypoints in the format Tensor([x1, y1, ..., xn, yn])
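
Whichever wrapper you use, the wrapped model is called just like the original one. The snippet below is a minimal sketch; the paddle.rand dummy batch and its shape are illustrative assumptions:

import paddle

# dummy input batch (B, C, H, W); replace with your real, preprocessed images
images = paddle.rand([4, 3, 256, 256])

with paddle.no_grad():
    outputs = tta_model(images)  # predictions merged over all augmented views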

Predict

If you have an exported static model (*.pdmodel, *.pdiparams, *.pdiparams.info), you can load it and run prediction as follows.

Load model [docstring]:
import patta as tta
model = tta.load_model(path='output/model')
Segmentation model wrapping [docstring]:
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')
Classification model wrapping [docstring]:
tta_model = tta.ClassificationTTAWrapper(model, tta.aliases.five_crop_transform())
Keypoints model wrapping [docstring]:
tta_model = tta.KeypointsTTAWrapper(model, tta.aliases.flip_transform(), scaled=True)
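
Putting the steps above together, a minimal end-to-end sketch could look like this (the 'output/model' path and the dummy input are illustrative assumptions):

import paddle
import patta as tta

# load the exported static model and wrap it for TTA
model = tta.load_model(path='output/model')
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='mean')

# dummy preprocessed batch; replace with your own input pipeline
images = paddle.rand([1, 3, 512, 512])
with paddle.no_grad():
    pred = tta_model(images)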

Use-Tools

Segmentation model [docstring]:

We recommend adapting the script seg.py to your own model.

python seg.py --model_path='output/model' \
              --batch_size=16 \
              --test_dataset='test.txt'

Note: this script is built around PaddleSeg.

Advanced-Examples (DIY Transforms)

Custom transform:
# defines 2 * 2 * 3 * 3 = 36 augmentations!
transforms = tta.Compose(
    [
        tta.HorizontalFlip(),
        tta.Rotate90(angles=[0, 180]),
        tta.Scale(scales=[1, 2, 4]),
        tta.Multiply(factors=[0.9, 1, 1.1]),        
    ]
)

tta_model = tta.SegmentationTTAWrapper(model, transforms)
Custom model (multi-input / multi-output):
# Example of how to process ONE batch of images with TTA
# Here `image`/`mask` are 4D tensors (B, C, H, W), `label` is a 2D tensor (B, N)
import paddle

labels, masks = [], []  # collect de-augmented predictions per transform

for transformer in transforms:  # custom transforms or e.g. tta.aliases.d4_transform()
    
    # augment image
    augmented_image = transformer.augment_image(image)
    
    # pass to model
    model_output = model(augmented_image, another_input_data)
    
    # reverse augmentation for mask and label
    deaug_mask = transformer.deaugment_mask(model_output['mask'])
    deaug_label = transformer.deaugment_label(model_output['label'])
    
    # save results
    masks.append(deaug_mask)
    labels.append(deaug_label)
    
# reduce results as you want, e.g. mean/max/min
label = paddle.stack(labels).mean(axis=0)
mask = paddle.stack(masks).mean(axis=0)

Optional Transforms

Transform       | Parameters                          | Values
----------------|-------------------------------------|-------------------------------------------------------------
HorizontalFlip  | -                                   | -
VerticalFlip    | -                                   | -
Rotate90        | angles                              | List[0, 90, 180, 270]
Scale           | scales, interpolation               | List[float], "nearest"/"linear"
Resize          | sizes, original_size, interpolation | List[Tuple[int, int]], Tuple[int, int], "nearest"/"linear"
Add             | values                              | List[float]
Multiply        | factors                             | List[float]
FiveCrops       | crop_height, crop_width             | int, int
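
The transforms in the table are combined with tta.Compose just like in the example above. The sketch below is illustrative only; the parameter names follow the table, while the concrete sizes and values are assumptions:

import patta as tta

transforms = tta.Compose(
    [
        tta.Resize(sizes=[(256, 256), (512, 512)], original_size=(512, 512), interpolation="linear"),
        tta.Add(values=[-0.1, 0, 0.1]),
    ]
)
tta_model = tta.SegmentationTTAWrapper(model, transforms, merge_mode='mean')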

Aliases (Combos)

  • flip_transform (horizontal + vertical flips)
  • hflip_transform (horizontal flip)
  • d4_transform (flips + rotation 0, 90, 180, 270)
  • multiscale_transform (scale transform, take scales as input parameter)
  • five_crop_transform (corner crops + center crop)
  • ten_crop_transform (five crops + five crops on horizontal flip)
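
Aliases are used exactly like a hand-built tta.Compose. A minimal sketch, assuming multiscale_transform accepts the scales listed above as a keyword argument:

transforms = tta.aliases.multiscale_transform(scales=[1, 2])
tta_model = tta.ClassificationTTAWrapper(model, transforms)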

Merge-modes
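
The merge mode controls how predictions from all augmented views are reduced into a single output and is selected via the merge_mode argument of the wrappers. This README mentions mean, max, and gmean; check the patta source for the full list supported by your version. A minimal sketch:

tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.d4_transform(), merge_mode='max')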

Installation

From GitHub:

# clone the repository, then install from the local directory
$ git clone https://github.com/AgentMaker/PaTTA.git
$ pip install PaTTA/

# or install directly from the repository URL

$ pip install git+https://github.com/AgentMaker/PaTTA.git

Run tests

# run the transform and base test suites
python test/test_transforms.py
python test/test_base.py
Comments
  • preprocess issue

    issue 1

    When I set crop_size to (1024, 512), I get the following error:

    Traceback (most recent call last):
      File "PaTTA/tools/seg.py", line 41, in <module>
        main(args.batch_size, imgs_list, args.crop_size)
      File "PaTTA/tools/seg.py", line 26, in main
        tensor_img = tta_model(tensor_img)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/patta/wrappers.py", line 39, in forward
        augmented_output = self.model(augmented_image, *args)[0]
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/io.py", line 1170, in __i_m_p_l__
        return _run_dygraph(self, input, program_holder)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/io.py", line 733, in _run_dygraph
        'is_test': instance._is_test
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/tracer.py", line 45, in trace_op
        not stop_gradient)
    ValueError: (InvalidArgument) Broadcast dimension mismatch. Operands could not be broadcast together with the shape of X = [16, 48, 128, 256] and the shape of Y = [16, 48, 384, 384]. Received [128] in X is not equal to [384] in Y at i:2.
      [Hint: Expected x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1 == true, but received x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1:0 != true:1.] (at /paddle/paddle/fluid/operators/elementwise/elementwise_op_function.h:160)
      [operator < elementwise_add > error] [operator < run_program > error]

    In fact, any crop_size value triggers an error. Changing it to 1536,1536 (the image size of the dataset) resolves the error above, but then issue 2 appears.

    issue 2

    Traceback (most recent call last):
      File "PaTTA/tools/seg.py", line 41, in <module>
        main(args.batch_size, imgs_list, args.crop_size)
      File "PaTTA/tools/seg.py", line 26, in main
        tensor_img = tta_model(tensor_img)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/patta/wrappers.py", line 39, in forward
        augmented_output = self.model(augmented_image, *args)[0]
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 902, in __call__
        outputs = self.forward(*inputs, **kwargs)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/io.py", line 1170, in __i_m_p_l__
        return _run_dygraph(self, input, program_holder)
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/io.py", line 733, in _run_dygraph
        'is_test': instance._is_test
      File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/tracer.py", line 45, in trace_op
        not stop_gradient)
    ValueError: (InvalidArgument) The 'shape' in ReshapeOp is invalid. The input tensor X'size must be equal to the capacity of 'shape'. But received X's shape = [16, 512, 384, 384], X's size = 1207959552, 'shape' is [1, 512, 147456], the capacity of 'shape' is 75497472.
      [Hint: Expected capacity == in_size, but received capacity:75497472 != in_size:1207959552.] (at /paddle/paddle/fluid/operators/reshape_op.cc:222)
      [operator < reshape2 > error] [operator < run_program > error]

    opened by CoderChen01 10
  • Fix the tests and run them automatically with GitHub Actions

    It seems the original tests could not be run at all; some of them were still PyTorch code. This PR fixes the tests and adds a CI configuration so that they run automatically.

    The Resize code also failed to run, because in Paddle the align_corners argument must be a bool and may not be None, so the code in transforms and functional was adjusted slightly as well.

    opened by SigureMo 4
  • [PaddlePaddle Hackathon] add image augment algorithms

    • Task: https://github.com/AgentMaker/PaTTA/issues/5
    • Description: add at least 5 image-domain data augmentation algorithms that slightly or significantly improve inference scores, to make PaTTA more useful

    • [x] Algorithms * 7
      • HorizontalShift: horizontal shift (DualTransform)
      • VerticalShift: vertical shift (DualTransform)
      • AdjustContrast: adjust image contrast (ImageOnlyTransform)
      • AdjustBrightness: adjust image brightness (ImageOnlyTransform)
      • AverageBlur: average (box) blur (ImageOnlyTransform)
      • GaussianBlur: Gaussian blur (ImageOnlyTransform)
      • Sharpen: sharpening (ImageOnlyTransform)
    • [x] Documentation (README + docstrings)
    • [x] Unit tests
    • [x] Self-test on AI Studio (partially public, valid for three days): https://aistudio.baidu.com/studio/project/partial/verify/2586123/9bf6d33c51e34ff1984273b17488dc8b

    All of the above algorithms run in batch mode to avoid slow Python for loops. The last three (the blur/sharpen filters) are implemented with paddle.nn.functional.conv2d; borders are simply zero-padded before convolution, without OpenCV's special border handling, but away from the borders the results match a direct OpenCV call.

    PaddlePaddle Hackathon 
    opened by SigureMo 2
  • [PaddlePaddle Hackathon] 97 Add image data augmentation algorithms

    (This is a task issue for the PaddlePaddle Hackathon; see the PaddlePaddle Hackathon page for details.)

    PaTTA is a test-time augmentation toolbox for PaddlePaddle models that aims to make model performance more stable: it augments the data at inference time and picks a more robust prediction by voting.

    [Task description]

    • Task title: add image data augmentation algorithms

    • Technical tags: Python

    • Difficulty: easy

    • Details: data augmentation is a fairly effective way to improve model capability; richer combinations let a model focus more on the target features during training and thus further improve its scores. PaTTA currently only offers the most common image augmentation algorithms. In this project you need to add at least 5 image-domain data augmentation algorithms, and they should slightly or significantly improve inference scores so as to make PaTTA more useful.

    [Deliverables]

    • A PR to the PaTTA project

    • A technical documentation write-up

    [Technical requirements]

    • Basic Python development skills

    • Experience using image augmentation in deep learning

    PaddlePaddle Hackathon 
    opened by GT-ZhangAcer 2
  • [PaddlePaddle Hackathon] AgentMaker task collection

    Hi everyone, we are very happy to announce that the first PaddlePaddle Hackathon has begun. The PaddlePaddle Hackathon is a deep-learning programming event open to developers worldwide, encouraging them to learn about and contribute to PaddlePaddle. There are four tracks (PaddlePaddle, Paddle Family, Paddle Friends, Paddle Anything) with 100 tasks in total for everyone to complete. See the PaddlePaddle Hackathon announcement for details. We hope you can hardly wait!

    This issue collects the AgentMaker tasks in the Paddle Friends track. The task list is as follows:

    | No. | Difficulty | Task issue |
    | --- | ---------- | ---------- |
    | 96  | ⭐️ | [PaddlePaddle Hackathon] 96 Exploring interpretability visualization for image classification models |
    | 97  | ⭐️ | [PaddlePaddle Hackathon] 97 Add image data augmentation algorithms |
    | 98  | ⭐️ | [PaddlePaddle Hackathon] 98 Explore searching for the best test-time image augmentation scheme |
    | 99  | ⭐️ | [PaddlePaddle Hackathon] 99 Adapt the AgentOCR tool to the JavaScript environment |
    | 100 | ⭐️ ⭐️ | [PaddlePaddle Hackathon] 100 Build Rubick deep-learning plugins |

    To claim a task in this event, please go to the PaddlePaddle Hackathon pinned issue to register and claim your task.

    Event website: PaddlePaddle Hackathon

    PaddlePaddle Hackathon 
    opened by GT-ZhangAcer 0
  • [PaddlePaddle Hackathon] 98 Explore searching for the best test-time image augmentation scheme

    (This is a task issue for the PaddlePaddle Hackathon; see the PaddlePaddle Hackathon page for details.)

    PaTTA is a test-time augmentation toolbox for PaddlePaddle models that aims to make model performance more stable: it augments the data at inference time and picks a more robust prediction by voting.

    [Task description]

    • Task title: explore searching for the best test-time image augmentation scheme

    • Technical tags: Python, PaddlePaddle

    • Difficulty: easy

    • Details: in typical deep-learning competitions, strategies such as model ensembling and TTA can effectively improve a contestant's score, but their runtime cost often makes them hard to apply in real-world scenarios. Although PaTTA already provides TTA tools, it is worth exploring whether, through statistics or similar means, we can recommend a balanced inference configuration when a user predicts a single image, so that accuracy still improves at only a small cost in speed. In this project you need to run inference on the Cifar100 dataset in the same environment, keeping the speed impact within 5% while still gaining at least a 0.1% improvement in accuracy.

    [Deliverables]

    • A PR to the PaTTA project

    • A technical documentation write-up

    [Technical requirements]

    • Able to run any image classification task on the core PaddlePaddle framework
    PaddlePaddle Hackathon 
    opened by GT-ZhangAcer 0
  • [PaddlePaddle Hackathon] 96 Exploring interpretability visualization for image classification models

    (This is a task issue for the PaddlePaddle Hackathon; see the PaddlePaddle Hackathon page for details.)

    PaTTA is a test-time augmentation toolbox for PaddlePaddle models that aims to make model performance more stable.

    [Task description]

    • Task title: exploring interpretability visualization for image classification models

    • Technical tags: PaTTA, Python, PaddlePaddle

    • Difficulty: easy

    Details: deep learning models rarely offer "interpretability" by construction, but that does not prevent us from using gradients, noise, and similar techniques to explain what a model is actually attending to, which means that in competitions we can also use such methods to understand the model's "focus" and improve our results.

    In this task you should start from product design, and may also consider how to optimize interpretability algorithms. The goal is to integrate the interpretability toolbox InterpretDL, or an interpretability module of your own, into the PaTTA toolbox to open up more possibilities for model analysis, so that users enhancing inference results with PaTTA can invoke visual interpretability features in a simple way and receive an interpretability analysis.

    PaTTA homepage: https://github.com/AgentMaker/PaTTA

    InterpretDL homepage: https://github.com/PaddlePaddle/InterpretDL

    [Deliverables]

    • A PR to the PaTTA project
    • A technical documentation write-up

    [Technical requirements]

    • Basic Python development skills

    • Experience with at least one Python imaging library such as Matplotlib or OpenCV

    PaddlePaddle Hackathon 
    opened by GT-ZhangAcer 0