To effectively detect faulty wafers

Overview

wafer_fault_detection

Aim of the project:

In electronics, a wafer (also called a slice or substrate) is a thin slice of semiconductor, such as crystalline silicon (c-Si), used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon it. The project aims to identify the state of a provided wafer by classifying it into one of two classes: +1 (good, can be used as a substrate) or -1 (bad, the wafer needs to be replaced). To this end, a training dataset is provided for building a machine learning classification model that can predict wafer quality.

Data Description:

The columns of the provided data can be classified into three parts: wafer name, sensor values, and label. The wafer name contains the batch number of the wafer, whereas the sensor values are obtained from measurements carried out on the wafer. The label column contains two unique values, +1 and -1, which identify whether the wafer is good or needs to be replaced. Additionally, we also require a schema file, which contains all the relevant information about the training files, such as the file names, the length of the date value in the file name, the length of the time value in the file name, the number of columns, the names of the columns, and their datatypes.
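For illustration, such a schema might look like the following. This is a hypothetical sketch shown as a Python dict (the actual schema is a JSON file, and its exact key names and values are defined by the project):

```python
# Hypothetical training schema; key names and values are illustrative only.
schema_training = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,   # e.g. "08012020"
    "LengthOfTimeStampInFile": 6,   # e.g. "120000"
    "NumberofColumns": 592,         # wafer name + sensor columns + label
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        # ... one entry per sensor column ...
        "Good/Bad": "Integer",
    },
}
```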

Directory creation:

All the necessary folders are created to cleanly separate the files, so that the end user can access them easily.
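A minimal sketch of this step, using hypothetical folder names (the project defines its own layout):

```python
import os

# Hypothetical directory layout for validated raw files and saved models.
for folder in ("Training_Raw_files_validated/Good_Raw",
               "Training_Raw_files_validated/Bad_Raw",
               "PreprocessingData",
               "models"):
    os.makedirs(folder, exist_ok=True)  # create if missing, leave if present
```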

Data Validation:

In this step, we match our dataset against the provided schema file to validate the file names, the number of columns, the column names, and their datatypes. If a file matches the schema values, it is considered a good file on which we can train or predict; otherwise, the file is considered bad and moved to the bad folder. Moreover, we also identify columns with null values. If an entire column of data is missing, we consider the file bad as well; on the contrary, if only a fraction of the data in a column is missing, we initially fill it with NaN and consider it good data.
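The sketch below illustrates these checks, assuming the schema keys and folder names introduced above; the file-name pattern and folder handling are illustrative, not the project's exact code:

```python
import os
import re
import shutil

import pandas as pd

def validate_file(path, schema, good_dir, bad_dir):
    """Move a raw file into the good or bad folder based on the schema."""
    # File name must look like wafer_<date>_<time>.csv, with digit counts
    # taken from the schema.
    pattern = (rf"^wafer_\d{{{schema['LengthOfDateStampInFile']}}}"
               rf"_\d{{{schema['LengthOfTimeStampInFile']}}}\.csv$")
    if not re.match(pattern, os.path.basename(path)):
        shutil.move(path, bad_dir)
        return

    df = pd.read_csv(path)
    # A wrong column count, or any column that is entirely missing, is fatal.
    if len(df.columns) != schema["NumberofColumns"] or df.isna().all().any():
        shutil.move(path, bad_dir)
        return

    shutil.move(path, good_dir)  # partially missing values stay as NaN
```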

Data Insertion in Database:

First, we create a database with the given name; if the database already exists, we simply open a connection to it. A table named "train_good_raw_dt" or "pred_good_raw_dt" (for training and prediction, respectively) is created in the database to hold the good data files obtained from the data validation step. If the table is already present, a new one is not created, and new files are inserted into the existing table, since we want training to be done on new as well as old training files. In the end, the data stored in the database is exported as a CSV file to be used for model training.
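A minimal sketch of this step, assuming SQLite and the folder names used above (the project's actual database backend and paths may differ):

```python
import glob
import sqlite3

import pandas as pd

conn = sqlite3.connect("Training.db")  # opens the DB, or creates it on first use

# Append every validated good file; to_sql creates the table on the first
# call and appends to the existing table on later ones.
for path in glob.glob("Training_Raw_files_validated/Good_Raw/*.csv"):
    pd.read_csv(path).to_sql("train_good_raw_dt", conn,
                             if_exists="append", index=False)

# Export the accumulated rows (old + new training files) to a single CSV.
pd.read_sql("SELECT * FROM train_good_raw_dt", conn) \
  .to_csv("TrainingFileFromDB/InputFile.csv", index=False)
conn.close()
```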

Data Pre-processing and Model Training:

In the training section, the data is first checked for NaN values in the columns; if any are present, they are imputed using the KNN imputer. Columns with zero standard deviation are also identified and removed, as they don't provide any information during model training. A prediction schema is created based on the remaining dataset columns. Afterwards, the KMeans algorithm is used to create clusters in the pre-processed data. The optimum number of clusters is selected from the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a different model on each cluster of the data. The KMeans model is trained on the pre-processed data and saved for further use in prediction. After the clusters are created, we find the best model for each cluster. We use four algorithms: "Random Forest", "K Neighbours", "Logistic Regression" and "XGBoost". For each cluster, every algorithm is tuned with the best parameters derived from GridSearch. We calculate the AUC score of each model and select the one with the best score; in this way, the best model is chosen for every cluster. All the per-cluster models are saved for use in prediction. In the end, the confusion matrix of the model associated with each cluster is also saved to give a glance at the models' performance.
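The following condensed sketch shows one way this pipeline fits together. The column names, hyperparameter grids, cluster range, and file paths are all assumptions for illustration, not the project's exact values:

```python
import joblib
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from xgboost import XGBClassifier

df = pd.read_csv("TrainingFileFromDB/InputFile.csv")
X = df.drop(columns=["Wafer", "Good/Bad"])   # column names assumed, see schema
y = df["Good/Bad"].replace(-1, 0)            # XGBoost expects {0, 1} labels

# 1. Impute NaNs, then drop zero-variance columns (they carry no signal).
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)
X = X.loc[:, X.std() > 0]

# 2. Pick k from the elbow curve, located programmatically by KneeLocator.
inertias = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_
            for k in range(1, 11)]
k = KneeLocator(range(1, 11), inertias,
                curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=k, random_state=42).fit(X)
joblib.dump(kmeans, "models/kmeans.pkl")

# 3. For each cluster, grid-search all four algorithms and keep the best AUC.
candidates = {
    "random_forest": (RandomForestClassifier(), {"n_estimators": [100, 200]}),
    "knn": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}),
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
    "xgboost": (XGBClassifier(), {"max_depth": [3, 5]}),
}
for cluster in range(k):
    mask = kmeans.labels_ == cluster
    X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3)
    best_auc, best_model = -1.0, None
    for name, (est, grid) in candidates.items():
        search = GridSearchCV(est, grid, cv=3).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, search.predict_proba(X_te)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, search.best_estimator_
    joblib.dump(best_model, f"models/cluster_{cluster}.pkl")
```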

Prediction:

For prediction, the essential directories are created first. The data validation, data insertion, and data pre-processing steps are the same as in the training section. The KMeans model created during training is loaded, and the cluster of each pre-processed prediction record is predicted. Based on the cluster number, the respective model is loaded and used to predict the data for that cluster. Once predictions are made for all the clusters, they are saved, along with the wafer names, in a CSV file at a given location.
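A minimal sketch of this flow, with file paths, column names, and the label mapping carried over as assumptions from the training sketch above:

```python
import joblib
import pandas as pd

kmeans = joblib.load("models/kmeans.pkl")
data = pd.read_csv("PredictionFileFromDB/InputFile.csv")  # already pre-processed
wafers, X = data["Wafer"], data.drop(columns=["Wafer"])

clusters = kmeans.predict(X)
results = []
for cluster in sorted(set(clusters)):
    mask = clusters == cluster
    model = joblib.load(f"models/cluster_{cluster}.pkl")  # per-cluster model
    preds = pd.Series(model.predict(X[mask])).map({0: -1, 1: 1})  # back to ±1
    results.append(pd.DataFrame({"Wafer": wafers[mask].values,
                                 "Prediction": preds.values}))

pd.concat(results).to_csv("Predictions.csv", index=False)
```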

Deployment:

We will be deploying the model to Heroku Cloud.

Owner
Arun Singh Babal
Engineer | Data Science Enthusiast | Machine Learning | Deep Learning | Advanced Computer Vision