Knowledge-informed machine learning on the PRONOSTIA (FEMTO) and IMS bearing data sets, used to predict remaining useful life (RUL).

Overview

Knowledge Informed Machine Learning using a Weibull-based Loss Function

This project explores the concept of knowledge-informed machine learning through a Weibull-based loss function, used to predict remaining useful life (RUL) on the IMS and PRONOSTIA (also called FEMTO) bearing data sets.

Open In Colab · Source code · arXiv

Knowledge-informed machine learning is used on the IMS and PRONOSTIA bearing data sets for remaining useful life (RUL) prediction. The knowledge is integrated into a neural network through a novel Weibull-based loss function. A thorough statistical analysis of the Weibull-based loss function is conducted, demonstrating the effectiveness of the method on the PRONOSTIA data set. However, the Weibull-based loss function is less effective on the IMS data set.

The experiment will be detailed in the Journal of Prognostics and Health Management (accepted and pending publication -- preprint here), with an extensive discussion of the results, shortcomings, and benefits. The paper also gives an overview of knowledge-informed machine learning as it applies to prognostics and health management (PHM).

You can replicate the work, and all figures, by following the instructions in the Setup section. Even easier: run the Colab notebook!

If you have any questions, leave a comment in the discussion, or email me ([email protected]).

Summary

In this work, we use the definition of knowledge-informed machine learning from von Rueden et al. (their excellent paper is here). Here's the general taxonomy of our knowledge-informed machine learning experiment:

(Figure: taxonomy of the knowledge source, representation, and integration used in this experiment)

Bearing vibration data (from the frequency domain) was used as input to feed-forward neural networks. The figure below shows the data as a spectrogram (a) and the spectrogram after "binning" (b). The binned data was used as the model input.

(Figure: (a) spectrogram of the bearing vibration data and (b) the spectrogram after binning)
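To illustrate the binning step, here is a minimal sketch of how spectrogram frequency rows might be aggregated into coarser bands before being fed to the network. The bin count and the averaging rule are illustrative assumptions, not the exact procedure in build_features.py.

```python
import numpy as np

def bin_spectrogram(spec, n_bins=20):
    """Collapse a (freq, time) spectrogram into n_bins coarse frequency
    bands by averaging adjacent frequency rows.

    Hypothetical helper: the repo's actual binning lives in
    src/features/build_features.py and may differ.
    """
    edges = np.linspace(0, spec.shape[0], n_bins + 1, dtype=int)
    return np.stack([spec[a:b].mean(axis=0) for a, b in zip(edges[:-1], edges[1:])])
```

Each binned frequency band then becomes one input feature per time step.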

A large hyperparameter search was conducted over the neural networks, and nine different Weibull-based loss functions were tested on each unique network.
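As a rough illustration of how knowledge enters through the loss, below is a minimal sketch of one possible Weibull-based loss: a standard MSE term plus a penalty tying the normalized prediction to the Weibull survival function. The parameters eta, beta, and lam are illustrative assumptions, and this sketch is not one of the paper's nine variants verbatim.

```python
import torch

def weibull_cdf(t, eta, beta):
    """Weibull CDF: F(t) = 1 - exp(-(t / eta)**beta)."""
    return 1.0 - torch.exp(-((t / eta) ** beta))

def weibull_mse_loss(y_pred, y_true, t, eta=100.0, beta=2.0, lam=0.1):
    """Hypothetical Weibull-based loss: MSE on the RUL target plus a
    penalty pulling the normalized prediction toward the Weibull
    survival function R(t) = 1 - F(t), which encodes the prior
    knowledge of the bearing's life distribution."""
    mse = torch.mean((y_pred - y_true) ** 2)
    reliability = 1.0 - weibull_cdf(t, eta, beta)  # R(t) from prior knowledge
    penalty = torch.mean((y_pred - reliability) ** 2)
    return mse + lam * penalty
```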

The chart below gives a qualitative view of the effectiveness of the Weibull-based loss functions on the two data sets.

(Figure: effectiveness of each Weibull-based loss function, shown as percentages)

We also conducted a statistical analysis of the results, as shown below.

(Figure: correlation of the Weibull-based loss functions to model results)
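A statistical comparison along these lines can be reproduced in a few lines of Python. The file name and column names below are hypothetical stand-ins for the summary CSVs that summarize_model_results.py writes to models/final.

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names -- adjust to the actual summary CSVs
# produced in models/final by summarize_model_results.py.
df = pd.read_csv("models/final/summarized_results.csv")
weibull = df.loc[df["loss_func"].str.contains("weibull"), "test_score"]
baseline = df.loc[~df["loss_func"].str.contains("weibull"), "test_score"]

# Mann-Whitney U test: do Weibull-loss scores differ from baseline scores?
u_stat, p_value = stats.mannwhitneyu(weibull, baseline, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```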

The top-performing models' RUL trends are shown below for both the IMS and PRONOSTIA data sets.

(Figure: RUL trend for the top-performing IMS model)
(Figure: RUL trend for the top-performing PRONOSTIA model)

Setup

Tested on Linux (macOS should also work). If you run Windows, you'll have to do much of the environment setup and data download/preprocessing manually.

To reproduce results:

  1. Clone this repo: git clone https://github.com/tvhahn/weibull-knowledge-informed-ml.git

  2. Create the virtual environment (assumes that Conda is installed).

    • Linux/MacOS: use command from the Makefile in the root directory - make create_environment
    • Windows: from root directory - conda env create -f envweibull.yml
    • HPC: make create_environment will detect HPC environment and automatically create environment from make_hpc_venv.sh. Tested on Compute Canada. Modify make_hpc_venv.sh for your own HPC cluster.
  3. Download raw data.

    • Linux/MacOS: use make download. Will automatically download to appropriate data/raw directory.
    • Windows: Manually download the IMS and PRONOSTIA (FEMTO) data sets from the NASA prognostics data repository and put them in the data/raw folder.
    • HPC: use make download. Will automatically detect HPC environment.
  4. Extract raw data.

    • Linux/MacOS: use make extract. Will automatically extract to the appropriate data/raw directory.
    • Windows: Manually extract the data. See the Project Organization section for the folder structure.
    • HPC: use make extract. Will automatically detect the HPC environment. Again, modify for your HPC cluster.
  5. Ensure the virtual environment is activated: conda activate weibull or source ~/weibull/bin/activate

  6. From the root directory of weibull-knowledge-informed-ml, run pip install -e . -- this gives the Python scripts access to the src folder.

  7. Train!

    • Linux/MacOS: use make train_ims or make train_femto. Note: the random search parameters can be changed via the constants in the Makefile (currently set to defaults).

    • Windows: run the training script manually with the appropriate arguments. For example: python src/models/train_models.py --data_set femto --path_data your_data_path --proj_dir your_project_directory_path (see the Python sketch after this list).

    • HPC: use make train_ims or make train_femto. The HPC environment should be automatically detected. A SLURM script will be run for a batch job.

      • Modify the train_model_ims_hpc.sh or train_model_femto_hpc.sh scripts in the src/models directory to meet the needs of your HPC cluster. These should work on Compute Canada out of the box.
  8. Filter out the poorly performing models and collate the results. This will create several results files in the models/final folder.

    • Linux/MacOS: use make summarize_ims_models or make summarize_femto_models. (Note: set the filter boundaries in summarize_model_results.py; this will eventually be modified to use argparse.)
    • Windows: run manually by calling the script.
    • HPC: use make summarize_ims_models or make summarize_femto_models. Again, change filter requirements in the summarize_model_results.py script.
  9. Make the figures of the data and results.

    • Linux/MacOS: use make figures_data and make figures_results. Figures will be generated and placed in the reports/figures folder.
    • Windows: run manually by calling the script.
    • HPC: use make figures_data and make figures_results.
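For Windows users, the manual training run from step 7 can also be launched from Python. This mirrors the documented invocation; the path arguments are placeholders for your local setup, not defaults shipped with the repo.

```python
import subprocess

# Equivalent of the manual (Windows) invocation in step 7; the path
# arguments are placeholders for your local paths.
subprocess.run(
    [
        "python", "src/models/train_models.py",
        "--data_set", "femto",
        "--path_data", "your_data_path",
        "--proj_dir", "your_project_directory_path",
    ],
    check=True,
)
```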

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands to reproduce work, like `make data` or `make train_ims`
├── README.md          <- The top-level README.
├── data
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump. Downloaded from the NASA prognostics data repository.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details (nothing in here yet)
│
├── models             <- Trained models, model predictions, and model summaries
│   ├── interim        <- Intermediate models that have not been analyzed. Output from the random search.
│   └── final          <- Final models that have been filtered and summarized. Several output CSV files as well.
│
├── notebooks          <- Jupyter notebooks used for data exploration and analysis. Of varying quality.
│   └── scratch        <- Scratch notebooks for quick experimentation.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials (empty).
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── envweibull.yml    <- The Conda environment file for reproducing the analysis environment
│                        (using Conda is recommended).
│
├── make_hpc_venv.sh  <- Bash script to create the HPC venv. Setup for my Compute Canada cluster.
│                        Modify to suit your own HPC cluster.
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and make predictions
│   │   ├── train_models.py
│   │   └── predict_model.py
│   │
│   └── visualization  <- Scripts to create figures of the data, results, and training progress
│       ├── visualize_data.py       
│       ├── visualize_results.py     
│       └── visualize_training.py    

Future List

As noted in the paper, the most valuable next step would be to test the Weibull-based loss functions on large, real-world industrial data sets. Suitable applications may include large fleets of pumps or gas turbines.
