Transformers and related deep network architectures are summarized and implemented here.

Overview

Transformers: from NLP to CV


This is a practical introduction to Transformers, from Natural Language Processing (NLP) to Computer Vision (CV).

  1. Introduction
  2. ViT: Transformers for Computer Vision
  3. Visualizing the attention
  4. MLP-Mixer
  5. Hybrid MLP-Mixer + ViT
  6. ConvMixer
  7. Hybrid ConvMixer + MLP-Mixer

1) Introduction

What is wrong with RNNs and CNNs?

Learning representations of variable-length data is a basic building block of sequence-to-sequence learning for neural machine translation, summarization, etc.

  • Recurrent Neural Networks (RNNs) are a natural fit for variable-length sentences and sequences of pixels. But sequential computation inhibits parallelization, and there is no explicit modeling of long- and short-range dependencies.
  • Convolutional Neural Networks (CNNs) are trivial to parallelize (per layer) and exploit local dependencies. However, capturing long-distance dependencies requires many layers.

Attention!

The Transformer architecture was proposed in the paper Attention is All You Need. As mentioned in the paper:

"We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely"

"Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train"

Machine Translation (MT) is the task of translating a sentence x from one language (the source language) to a sentence y in another language (the target language). One basic and well-known neural network architecture for NMT is called sequence-to-sequence (seq2seq), and it involves two RNNs.

  • Encoder: an RNN that encodes the input sequence into a single vector (the sentence encoding)
  • Decoder: an RNN that generates the output sequence conditioned on the encoder's output (a conditioned language model); a minimal sketch follows the list
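
To make this concrete, here is a minimal seq2seq sketch in PyTorch. All names and sizes are illustrative assumptions, not the exact model of any paper or notebook:

```python
# Minimal seq2seq sketch (illustrative names and sizes): the encoder
# compresses the source sentence into its final hidden state, and the
# decoder generates the target conditioned on that single vector.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_vocab)

    def forward(self, src_tokens, tgt_tokens):
        # Encode: the whole source sentence ends up in one vector h.
        _, h = self.encoder(self.src_emb(src_tokens))
        # Decode: a conditioned language model initialized with h.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), h)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Note how everything the decoder sees about the source is the single vector h; this is exactly the bottleneck discussed next.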

[Figure: seq2seq encoder-decoder]

The problem with vanilla seq2seq is the information bottleneck: the encoding of the source sentence must capture all of its information in one fixed-size vector.

As mentioned in the paper Neural Machine Translation by Jointly Learning to Align and Translate:

"A potential issue with this encoder–decoder approach is that a neural network needs to be able to compress all the necessary information of a source sentence into a fixed-length vector. This may make it difficult for the neural network to cope with long sentences, especially those that are longer than the sentences in the training corpus."

[Animation: attention in a seq2seq model]

Attention provides a solution to the bottleneck problem:

  • Core idea: at each step of the decoder, use a direct connection to the encoder to focus on a particular part of the source sequence. Attention is basically a technique for computing a weighted sum of the values (in the encoder), dependent on another value (in the decoder).

The main idea of attention can be summarized, as mentioned in OpenAI's article:

"... every output element is connected to every input element, and the weightings between them are dynamically calculated based upon the circumstances, a process called attention."

Queries and Values

  • In the seq2seq + attention model, each decoder hidden state (the query) attends to all the encoder hidden states (the values).
  • The weighted sum is a selective summary of the information contained in the values, where the query determines which values to focus on.
  • Attention is a way to obtain a fixed-size representation of an arbitrary set of representations (the values), dependent on some other representation (the query); a minimal sketch follows the list.
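
Here is the promised sketch of attention as a weighted sum, in plain NumPy (the sizes and names are illustrative):

```python
# Attention as a weighted sum (illustrative sizes and names): one decoder
# state (the query) attends over all encoder states (the values).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8                                   # hidden size (arbitrary)
encoder_states = np.random.randn(5, d)  # 5 source positions (the values)
query = np.random.randn(d)              # one decoder hidden state

scores = encoder_states @ query         # similarity of the query to each value
weights = softmax(scores)               # attention distribution, sums to 1
context = weights @ encoder_states      # fixed-size weighted sum, shape (d,)
print(weights.round(2), context.shape)
```

The context vector has a fixed size no matter how long the source sequence is, which is precisely the fixed-size representation described above.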

2) ViT: Transformers for Computer Vision

Transformer-based architectures are used not only for NLP but also for computer vision tasks. One important example is the Vision Transformer (ViT), which represents a direct application of Transformers to image classification, without any image-specific inductive biases. As mentioned in the paper:

"We show that reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks"

"Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks"

[Figure: ViT architecture]

As we can see, the input image is split into patches, which are treated the same way as tokens (words) in an NLP application. Position embeddings are added to the patch embeddings to retain positional information. Similar to BERT's class token, an extra learnable classification token is added to the sequence, and a classification head attached to it is used during pre-training and fine-tuning. The model is trained on image classification in a supervised fashion.
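
A sketch of this input pipeline in PyTorch (the patch size, dimensions, and names are assumptions for illustration, not the paper's configuration):

```python
# ViT input pipeline sketch (illustrative sizes): split the image into
# patches, linearly embed each patch, prepend a class token, and add
# position embeddings.
import torch
import torch.nn as nn

img = torch.randn(1, 3, 32, 32)  # e.g. one CIFAR-10-sized image
p, dim = 4, 64                   # 4x4 patches -> 8*8 = 64 tokens

# Unfold the image into flattened patches: (batch, num_patches, 3*p*p)
patches = img.unfold(2, p, p).unfold(3, p, p)  # (1, 3, 8, 8, 4, 4)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, 64, 3 * p * p)

embed = nn.Linear(3 * p * p, dim)                 # patch embedding
cls_token = nn.Parameter(torch.zeros(1, 1, dim))  # learnable class token
pos_embed = nn.Parameter(torch.zeros(1, 64 + 1, dim))

tokens = embed(patches)                         # (1, 64, dim)
tokens = torch.cat([cls_token, tokens], dim=1)  # prepend the class token
tokens = tokens + pos_embed                     # retain positional info
print(tokens.shape)  # torch.Size([1, 65, 64]), fed to the Transformer encoder
```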

Multi-head attention

The intuition is similar to having multiple filters in CNNs: multi-head attention gives the network more capacity and the ability to learn different attention patterns. By having multiple different layers that generate (or project) the vectors of queries, keys, and values, we can learn multiple representations of these queries, keys, and values.

[Figure: multi-head attention]

Each token is projected (in a learnable way) into three vectors Q, K, and V (a minimal sketch follows the list):

  • Q: Query vector: What I want
  • K: Key vector: What type of info I have
  • V: Value vector: What actual info I have
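
Here is the promised multi-head self-attention sketch in PyTorch (dimensions are illustrative):

```python
# Multi-head self-attention sketch (illustrative sizes): each head projects
# the tokens into its own Q, K, V and attends independently.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.heads, self.dh = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)  # learnable projections for Q, K, V
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, tokens, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split the channel dim into independent heads: (b, heads, n, dh)
        split = lambda t: t.view(b, n, self.heads, self.dh).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.dh ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)  # concatenate heads
        return self.proj(out), attn  # attn: (b, heads, n, n)

x = torch.randn(2, 144, 64)  # e.g. 144 patch tokens
y, attn = MultiHeadSelfAttention()(x)
print(y.shape, attn.shape)  # (2, 144, 64) (2, 4, 144, 144)
```

The returned attn tensor is exactly what is visualized in the next section: one n × n attention map per head.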

3) Visualizing the attention

Open In Colab

The basic ViT architecture is used, but with only one transformer layer with one (or four) head(s) for simplicity. The model is trained on the CIFAR-10 classification task. The image is split into 12 × 12 = 144 patches, and after training we can inspect the 144 × 144 attention scores (where each patch can attend to all the others).

[Figure: image split into patches]

The attention map represents the correlation (attention) between all the tokens, where each row sums to 1, representing the probability distribution of attention from one query patch over all the patches.
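
As a quick illustration of that bookkeeping (NumPy; the matrix here is a random stand-in, not the trained model's scores):

```python
# Each row of the (144, 144) attention map is a distribution over the 144
# patches, so one row can be reshaped into a 12 x 12 heatmap over the image.
import numpy as np

attn = np.random.rand(144, 144)                # stand-in for real scores
attn = attn / attn.sum(axis=1, keepdims=True)  # rows sum to 1, like softmax
assert np.allclose(attn.sum(axis=1), 1.0)

query_patch = 42                             # arbitrary query patch index
heatmap = attn[query_patch].reshape(12, 12)  # where this patch attends
print(heatmap.shape)  # visualize with e.g. plt.imshow(heatmap)
```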

[Figure: 144 × 144 attention map]

Long-distance attention: we can see two interesting patterns, where a background patch attends to other distant background patches, and an airplane patch attends to other distant airplane patches.

[Figure: long-distance attention patterns]

We can try more heads and more transformer layers and inspect the resulting attention patterns.

[Animation: attention patterns with more heads and layers]


4) MLP-Mixer

Open In Colab

MLP-Mixer is proposed in the paper MLP-Mixer: An all-MLP Architecture for Vision. As mentioned in the paper:

"While convolutions and attention are both sufficient for good performance, neither of them is necessary!"

"Mixer is a competitive but conceptually and technically simple alternative, that does not use convolutions or self-attention"

Mixer accepts a sequence of linearly projected image patches (tokens) shaped as a “patches × channels” table as input, and maintains this dimensionality. Mixer makes use of two types of MLP layers:

[Figure: MLP-Mixer architecture]

  • Channel-mixing MLPs allow communication between different channels; they operate on each token independently and take individual rows of the table as inputs.
  • Token-mixing MLPs allow communication between different spatial locations (tokens); they operate on each channel independently and take individual columns of the table as inputs.

These two types of layers are interleaved to enable interaction of both input dimensions.
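
A sketch of one Mixer block under these definitions (PyTorch; the layer sizes are illustrative, not the paper's exact configuration):

```python
# One Mixer block (illustrative sizes): a token-mixing MLP acts across
# patches (columns of the table) and a channel-mixing MLP acts across
# channels (rows), each with LayerNorm and a residual connection.
import torch
import torch.nn as nn

def mlp(dim, hidden):
    return nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

class MixerBlock(nn.Module):
    def __init__(self, tokens=64, channels=128):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(channels), nn.LayerNorm(channels)
        self.token_mlp = mlp(tokens, tokens * 2)        # mixes across patches
        self.channel_mlp = mlp(channels, channels * 2)  # mixes across channels

    def forward(self, x):  # x: (batch, tokens, channels)
        # Token mixing: transpose so the Linear layers act over the token axis.
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

x = torch.randn(2, 64, 128)
print(MixerBlock()(x).shape)  # torch.Size([2, 64, 128])
```

Because the token-mixing MLP sees the tokens in a fixed order, it is sensitive to their positions, which is why Mixer can skip position embeddings (as quoted below).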

"The computational complexity of the network is linear in the number of input patches, unlike ViT whose complexity is quadratic"

"Unlike ViTs, Mixer does not use position embeddings"

It is commonly observed that the first layers of CNNs tend to learn detectors that act on pixels in local regions of the image. In contrast, Mixer allows for global information exchange in the token-mixing MLPs.

"Recall that the token-mixing MLPs allow global communication between different spatial locations."

[Figure: hidden units of the token-mixing MLPs]

The figure shows hidden units of the four token-mixing MLPs of a Mixer trained on the CIFAR-10 dataset.


5) Hybrid MLP-Mixer and ViT

Open In Colab

We can use both the MLP-Mixer and ViT in one network architecture to get the best of both worlds.

[Figure: hybrid MLP-Mixer + ViT architecture]

Adding a few self-attention sublayers to Mixer is expected to offer a simple way to trade off speed for accuracy.
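
One hedged way to realize this combination in PyTorch (an illustrative sketch, not necessarily the notebook's exact architecture): cheap Mixer-style token mixing first, then a standard Transformer encoder layer for content-aware attention.

```python
# Hybrid sketch (illustrative sizes): two token-mixing blocks followed by
# one self-attention (Transformer encoder) layer.
import torch
import torch.nn as nn

class TokenMix(nn.Module):  # minimal token-mixing MLP with a residual
    def __init__(self, tokens, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mix = nn.Sequential(nn.Linear(tokens, tokens), nn.GELU())

    def forward(self, x):  # (batch, tokens, channels)
        return x + self.mix(self.norm(x).transpose(1, 2)).transpose(1, 2)

hybrid = nn.Sequential(
    TokenMix(tokens=64, channels=128),
    TokenMix(tokens=64, channels=128),
    nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True),
)
print(hybrid(torch.randn(2, 64, 128)).shape)  # torch.Size([2, 64, 128])
```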


6) ConvMixer

Open In Colab

ConvMixer is proposed in the paper Patches Are All You Need?. The paper asks: is the performance of ViTs due to the inherently more powerful Transformer architecture, or is it at least partly due to using patches as the input representation?

ConvMixer is an extremely simple model that is similar in many aspects to the ViT and the even-more-basic MLP-Mixer.

Despite its simplicity, ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and dataset sizes, in addition to outperforming classical vision models such as the ResNet.

While self-attention and MLPs are theoretically more flexible, allowing for large receptive fields and content-aware behavior, the inductive bias of convolution is well-suited to vision tasks and leads to high data efficiency.

Note, however, that ConvMixers are substantially slower at inference than the competitors!

[Figure: ConvMixer architecture]
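
A compact ConvMixer sketch in PyTorch, following the structure in the paper's figure (the sizes here are illustrative, not the paper's configuration):

```python
# ConvMixer sketch (illustrative sizes): a convolutional patch embedding,
# then repeated blocks of depthwise convolution (spatial mixing, with a
# residual) and pointwise 1x1 convolution (channel mixing).
import torch
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim=128, depth=4, kernel=5, patch=4, n_classes=10):
    block = lambda: nn.Sequential(
        Residual(nn.Sequential(  # depthwise conv mixes spatial locations
            nn.Conv2d(dim, dim, kernel, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm2d(dim))),
        nn.Conv2d(dim, dim, 1),  # pointwise conv mixes channels
        nn.GELU(), nn.BatchNorm2d(dim))
    return nn.Sequential(
        nn.Conv2d(3, dim, patch, stride=patch),  # patch embedding
        nn.GELU(), nn.BatchNorm2d(dim),
        *[block() for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(dim, n_classes))

print(conv_mixer()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```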


7) Hybrid MLP-Mixer and ConvMixer

Open In Colab

Once again, we can use both the MLP-Mixer and ConvMixer in one network architecture to get the best of both worlds. Here is a simple example.

[Figure: hybrid ConvMixer + MLP-Mixer architecture]
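
One illustrative way to combine them (an assumption-laden sketch, not necessarily the notebook's exact architecture): ConvMixer-style blocks operate on the 2D feature map, then the grid is flattened into tokens for a Mixer-style token-mixing MLP.

```python
# Hybrid sketch (illustrative sizes): convolutional spatial/channel mixing
# on the feature map, then MLP token mixing on the flattened grid.
import torch
import torch.nn as nn

dim, grid = 32, 8  # illustrative channel count and feature-map size

conv_stage = nn.Sequential(                     # ConvMixer-style mixing
    nn.Conv2d(3, dim, 4, stride=4), nn.GELU(),  # patch embedding: 32 -> 8
    nn.Conv2d(dim, dim, 5, groups=dim, padding="same"), nn.GELU(),
    nn.Conv2d(dim, dim, 1), nn.GELU())

token_mix = nn.Sequential(                      # Mixer-style token mixing
    nn.Linear(grid * grid, grid * grid), nn.GELU())

x = conv_stage(torch.randn(2, 3, 32, 32))       # (2, dim, 8, 8)
tokens = x.flatten(2)                           # (2, dim, 64): channels x tokens
tokens = tokens + token_mix(tokens)             # mix across the 64 tokens
print(tokens.shape)  # torch.Size([2, 32, 64])
```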


References and more information

  • Attention Is All You Need
  • Neural Machine Translation by Jointly Learning to Align and Translate
  • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)
  • MLP-Mixer: An all-MLP Architecture for Vision
  • Patches Are All You Need? (ConvMixer)