Arabic Car License Recognition. A solution to the Kaggle competition Machathon 3.0.

Overview


Arabic licence plate recognition 🚗

  • Solution to the Kaggle competition Machathon 3.0.
  • Ranked in the top 6️⃣ at the final evaluation phase.
  • Check our solution now on Colab!
  • Check the solution presentation

Preprocessing Pipeline

Schematic of the preprocessing pipeline.

Approach

Step 1: Preprocessing enhancements on the image.

  • Most images had bad illumination and noise:
    • Morphological operations to maximize contrast.
    • Gaussian blur to remove noise.
  • Thresholding on both the Value and Saturation channels (a sketch of this stage follows below).
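A minimal OpenCV sketch of this stage, assuming the image is first converted to HSV; the function name, kernel sizes, and the use of Otsu thresholds are illustrative assumptions, not the exact values used in the notebook.

import cv2

def preprocess_plate_image(bgr_img):
    """Illustrative preprocessing: contrast boost, denoising, and
    thresholding on the Value and Saturation channels (parameters are guesses)."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Morphological top-hat / black-hat to maximize contrast on the Value channel
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(v, cv2.MORPH_TOPHAT, kernel)
    blackhat = cv2.morphologyEx(v, cv2.MORPH_BLACKHAT, kernel)
    v_contrast = cv2.add(cv2.subtract(v, blackhat), tophat)

    # Gaussian blur to suppress noise before thresholding
    v_blur = cv2.GaussianBlur(v_contrast, (5, 5), 0)
    s_blur = cv2.GaussianBlur(s, (5, 5), 0)

    # Threshold the Value (bright plate) and Saturation (weakly saturated white) channels
    _, v_mask = cv2.threshold(v_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, s_mask = cv2.threshold(s_blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # White-plate pixels are both bright and weakly saturated
    return cv2.bitwise_and(v_mask, s_mask)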

Step 2: Extracting the white plate using contours.

  • Get contours and sort them by area.
  • Polygon approximation for noisy contours.
  • Convex hull for concave polygons.
  • 4-point transformation for difficult camera angles.

We now have the numbers in one contour and the letters in another; a sketch of this step is shown below.
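A sketch of the plate-extraction step, assuming the binary mask produced by the previous stage; the function name, approximation epsilon, fallbacks, and output plate size are hypothetical.

import cv2
import numpy as np

def extract_plate(binary_mask, bgr_img):
    """Illustrative: take the largest contour, approximate it to a quad
    (via the convex hull for concave shapes), then warp it fronto-parallel."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cnt = sorted(contours, key=cv2.contourArea, reverse=True)[0]

    # Polygon approximation; fall back to the convex hull for concave/noisy contours
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    if len(approx) != 4:
        approx = cv2.approxPolyDP(cv2.convexHull(cnt), 0.02 * peri, True)
    if len(approx) != 4:
        # last resort: minimum-area bounding rectangle
        approx = cv2.boxPoints(cv2.minAreaRect(cnt))

    pts = approx.reshape(-1, 2).astype("float32")

    # Order corners: top-left, top-right, bottom-right, bottom-left
    s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
    src = np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                      pts[np.argmax(s)], pts[np.argmax(d)]])

    w, h = 400, 100  # assumed output plate size
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(bgr_img, M, (w, h))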

Step 3: Separating characters from the white plate using sliding windows.

Contours can't be used to extract the symbols inside the white plate, since a single Arabic letter may be made up of multiple parts, e.g. ت may produce 2-3 contours.

Solution

  • Tuned two sliding windows: one for the letters' white plate, the other for the numbers'.
    • Variable window width.
    • Window height equals the white plate height, since Arabic characters may consist of multiple parts.
  • Selecting which window to keep:
    • It must have no black pixels on its sides.
    • It must contain a number of black pixels within a specific range.
    • From each group of candidate windows, the one with the most black pixels is selected (see the sketch below).
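A simplified sketch of the window search, assuming a binarized plate crop with black (0) characters on a white background; the function name, width range, and black-pixel thresholds are hypothetical tuning values.

import numpy as np

def find_character_windows(plate_bin, min_w=10, max_w=60,
                           min_black=50, max_black=2000):
    """Slide variable-width, full-height windows over a binary plate image and,
    per group of overlapping candidates, keep the window with the most black pixels."""
    h, w = plate_bin.shape
    candidates = []
    for x in range(w):
        for win_w in range(min_w, min(max_w, w - x)):
            window = plate_bin[:, x:x + win_w]
            # Reject windows whose left/right borders cut through a character
            if (window[:, 0] == 0).any() or (window[:, -1] == 0).any():
                continue
            black = int((window == 0).sum())
            if min_black <= black <= max_black:
                candidates.append((x, win_w, black))

    # Group overlapping candidates and keep the densest window in each group
    candidates.sort()
    selected, group, group_end = [], [], None
    for x, win_w, black in candidates:
        if group and x > group_end:
            # this window starts past the current group: close the group
            selected.append(max(group, key=lambda c: c[2]))
            group, group_end = [], None
        group.append((x, win_w, black))
        group_end = x + win_w if group_end is None else max(group_end, x + win_w)
    if group:
        selected.append(max(group, key=lambda c: c[2]))

    # Return (x, y, width, height) boxes covering the full plate height
    return [(x, 0, win_w, h) for x, win_w, _ in selected]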

Step 4: Character recognition.

  • Training two models (sketched below), since Arabic letters and numbers can look alike, e.g. (أ, 1) and (5, ه):
    • one for classifying only Arabic letters.
    • one for classifying Arabic numbers.
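A hedged Keras sketch of the two-classifier setup; the class counts (28 letters, 10 digits) match the data folders below, but the architecture, input size, and training settings are assumptions rather than the notebook's exact model.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes, input_shape=(32, 32, 1)):
    """Small CNN used for both symbol classifiers (architecture is assumed)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# One model per symbol type, since letters and digits can look alike (e.g. أ vs 1)
letter_model = build_classifier(num_classes=28)   # Arabic letters
number_model = build_classifier(num_classes=10)   # Arabic numbers
for model in (letter_model, number_model):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])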

Project Organization

Scripts applied to the images

./Macathon/code/
├── extract_bbx_xml.ipynb                       : Takes a directory of images and their bbx data stored in XML files, and crops the bbxs from the images.
|                                                 The XML file contains the licence label (name) and the xmin, ymin, xmax, ymax of each bbx in an image.
├── extract_bbx_txt.ipynb                       : Takes a directory of images and their bbx data stored in txt files, and crops the bbxs from the images.
|                                                 The txt file corresponding to one image may contain multiple bbxs, each given as a row of xmin, ymin, xmax, ymax.
└── crop_right_noise.ipynb                      : Crops an image by some percentage and replaces the original with the cropped image.

Model versions

./Macathon/code/
└── model.ipynb                      : The preprocessing and modeling stage. Contains:
                                          - Preprocessing functions
                                          - Training both classifiers
                                          - Prediction and generating the output csv file

Data Folder

./Macathon/data/
├── challenging_images.rar                      : Contains the most challenging images collected from the train data.
├── cropped_letters.zip                         : 28 subfolders corresponding to the 28 letters of the Arabic alphabet.
|                                                 Each subfolder holds images of the letter it's named after, cropped from the train data distribution.
├── cropped_numbers.zip                         : 10 subfolders for the 10 numbers.
|                                                 Each subfolder holds images of the number it's named after, cropped from the train data distribution.
├── machathon-3.zip                             : The data uploaded with the Kaggle competition.
└── testLetters.zip                             : 200 labeled images from the test data distribution.
                                                  Each image has a corresponding xml file holding the bbx locations in it.

Contributors

This masterpiece was designed and implemented by

Hossam Saeed
Mostafa Wael
Nada Elmasry
Noran Hany