Knowledge-informed machine learning on the PRONOSTIA (FEMTO) and IMS bearing data sets to predict remaining useful life (RUL).

Overview

Knowledge Informed Machine Learning using a Weibull-based Loss Function

This project explores knowledge-informed machine learning through a Weibull-based loss function, used to predict remaining useful life (RUL) on the IMS and PRONOSTIA (also called FEMTO) bearing data sets.


Knowledge-informed machine learning is used on the IMS and PRONOSTIA bearing data sets for remaining useful life (RUL) prediction. The knowledge is integrated into a neural network through a novel Weibull-based loss function. A thorough statistical analysis of the Weibull-based loss function is conducted, demonstrating the effectiveness of the method on the PRONOSTIA data set. However, the Weibull-based loss function is less effective on the IMS data set.
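The knowledge being integrated comes from reliability engineering: bearing failure times are commonly modeled with a two-parameter Weibull distribution. As a generic illustration (the failure times below are made up, not taken from either data set), such a distribution can be fitted with SciPy:

```python
from scipy import stats

# Hypothetical run-to-failure times, in hours (illustrative values only)
failure_times = [98.0, 125.0, 141.0, 163.0, 202.0]

# Fit a two-parameter Weibull by fixing the location parameter at zero;
# weibull_min.fit returns (shape beta, location, scale eta)
beta, loc, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"shape (beta) = {beta:.2f}, scale (eta) = {eta:.1f} hours")
```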

The experiment will be detailed in the Journal of Prognostics and Health Management (accepted and pending publication; preprint here), with an extensive discussion of the results, shortcomings, and benefits. The paper also gives an overview of knowledge-informed machine learning as it applies to prognostics and health management (PHM).

You can replicate the work, and all figures, by following the instructions in the Setup section. Even easier: run the Colab notebook!

If you have any questions, leave a comment in the discussion, or email me ([email protected]).

Summary

In this work, we use the definition of knowledge-informed machine learning from von Rueden et al. (their excellent paper is here). Here is the general taxonomy of our knowledge-informed machine learning experiment:

[Figure: taxonomy of the knowledge-informed machine learning experiment]

Bearing vibration data (from the frequency domain) was used as input to feed-forward neural networks. The figure below shows the data as a spectrogram (a) and the spectrogram after "binning" (b). The binned data was used as input.

[Figure: spectrogram of the vibration data (a) and the binned spectrogram (b)]
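As a rough sketch of the binning step, adjacent frequency rows of the spectrogram can be averaged into coarse bins. The function and parameter names below are illustrative, not the repo's; see src/features/build_features.py for the actual processing:

```python
import numpy as np

def bin_spectrogram(spec, n_bins):
    """Average adjacent frequency rows of a (freq x time) spectrogram
    into n_bins coarse frequency bins (illustrative implementation)."""
    n_freq, _ = spec.shape
    edges = np.linspace(0, n_freq, n_bins + 1, dtype=int)
    return np.stack([spec[edges[i]:edges[i + 1]].mean(axis=0)
                     for i in range(n_bins)])
```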

A large hyper-parameter search was conducted over the neural networks, and nine different Weibull-based loss functions were tested on each unique network.
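The exact loss definitions are given in the paper; as a minimal sketch of the general idea (not the paper's exact formulation, and with illustrative parameter names eta, beta, and lambda_w), a Weibull-based loss can combine a standard MSE term with a penalty that pulls predictions toward a Weibull-derived RUL proxy:

```python
import torch

def weibull_cdf(t, eta, beta):
    # Two-parameter Weibull CDF: F(t) = 1 - exp(-(t / eta) ** beta)
    return 1.0 - torch.exp(-((t / eta) ** beta))

def weibull_mse_loss(y_pred, y_true, t, eta, beta, lambda_w=1.0):
    # Data-fit term: standard MSE against the true (normalized) RUL
    mse = torch.mean((y_pred - y_true) ** 2)
    # Knowledge term: distance to a Weibull-derived RUL proxy, here the
    # survival fraction 1 - F(t) at the bearing's current run time t
    weibull_target = 1.0 - weibull_cdf(t, eta, beta)
    weibull_term = torch.mean((y_pred - weibull_target) ** 2)
    return mse + lambda_w * weibull_term
```

Each of the nine variants combines the data-fit and knowledge terms in a different way; see the paper for the precise definitions.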

The chart below qualitatively shows the effectiveness of the Weibull-based loss functions on the two data sets.

[Figure: percentage chart of Weibull-based loss function effectiveness on the two data sets]

We also conducted a statistical analysis of the results, as shown below.

[Figure: correlation of the Weibull-based loss functions to model results]

The top-performing models' RUL trends are shown below, for both the IMS and PRONOSTIA data sets.

[Figure: RUL trend of the top-performing model on the IMS data set]
[Figure: RUL trend of the top-performing model on the PRONOSTIA data set]

Setup

Tested on Linux (macOS should also work). If you run Windows, you'll have to do much of the environment setup and data download/preprocessing manually.

To reproduce results:

  1. Clone this repo - git clone https://github.com/tvhahn/weibull-knowledge-informed-ml.git

  2. Create virtual environment. Assumes that Conda is installed.

    • Linux/MacOS: use the command from the Makefile in the root directory - make create_environment
    • Windows: from the root directory - conda env create -f envweibull.yml
    • HPC: make create_environment will detect the HPC environment and automatically create the environment from make_hpc_venv.sh. Tested on Compute Canada. Modify make_hpc_venv.sh for your own HPC cluster.
  3. Download raw data.

    • Linux/MacOS: use make download. This will automatically download the data to the appropriate data/raw directory.
    • Windows: manually download the IMS and PRONOSTIA (FEMTO) data sets from the NASA Prognostics Data Repository and put them in the data/raw folder.
    • HPC: use make download. The HPC environment will be automatically detected.
  4. Extract raw data.

    • Linux/MacOS: use make extract. This will automatically extract to the appropriate data/raw directory.
    • Windows: manually extract the data. See the Project Organization section for the folder structure.
    • HPC: use make extract. The HPC environment will be automatically detected. Again, modify for your HPC cluster.
  5. Ensure the virtual environment is activated: conda activate weibull or source ~/weibull/bin/activate

  6. From the root directory of weibull-knowledge-informed-ml, run pip install -e . so that the Python scripts can import from the src folders (see the import sketch after this list).

  7. Train!

    • Linux/MacOS: use make train_ims or make train_femto. Note: to change the random-search parameters, set the constants in the Makefile (currently at their defaults).

    • Windows: run the training script manually with the appropriate arguments. For example: python src/models/train_models.py --data_set femto --path_data your_data_path --proj_dir your_project_directory_path

    • HPC: use make train_ims or make train_femto. The HPC environment should be automatically detected. A SLURM script will be run for a batch job.

      • Modify train_model_ims_hpc.sh or train_model_femto_hpc.sh in the src/models directory to meet the needs of your HPC cluster. This should work on Compute Canada out of the box.
  8. Filter out the poorly performing models and collate the results. This will create several results files in the models/final folder.

    • Linux/MacOS: use make summarize_ims_models or make summarize_femto_models. (Note: set the filter boundaries in summarize_model_results.py; argparse support may be added eventually.)
    • Windows: run manually by calling the script.
    • HPC: use make summarize_ims_models or make summarize_femto_models. Again, change filter requirements in the summarize_model_results.py script.
  9. Make the figures of the data and results.

    • Linux/MacOS: use make figures_data and make figures_results. Figures will be generated and placed in the reports/figures folder.
    • Windows: run manually by calling the script.
    • HPC: use make figures_data and make figures_results.
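After step 6, the src package is importable from any directory. A quick check (module names follow the tree in Project Organization below; the subpackage imports assume the usual __init__.py files are present):

```python
# Run from anywhere after `pip install -e .` in the project root
from src.features import build_features
from src.visualization import visualize_results
```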

Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands to reproduce work, like `make data` or `make train_ims`
├── README.md          <- The top-level README.
├── data
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump. Downloaded from the NASA Prognostics Data Repository.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details (nothing in here yet)
│
├── models             <- Trained models, model predictions, and model summaries
│   ├── interim        <- Intermediate models that have not been analyzed. Output from the random search.
│   ├── final          <- Final models that have been filtered and summarized. Several output CSV files as well.
│
├── notebooks          <- Jupyter notebooks used for data exploration and analysis. Of varying quality.
│   ├── scratch        <- Scratch notebooks for quick experimentation.     
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials (empty).
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── envweibull.yml    <- The Conda environment file for reproducing the analysis environment
│                        (using Conda is recommended).
│
├── make_hpc_venv.sh  <- Bash script to create the HPC venv. Setup for my Compute Canada cluster.
│                        Modify to suit your own HPC cluster.
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and make predictions
│   │   ├── train_models.py
│   │   └── predict_model.py
│   │
│   └── visualization  <- Scripts to create figures of the data, results, and training progress
│       ├── visualize_data.py       
│       ├── visualize_results.py     
│       └── visualize_training.py    

Future Work

As noted in the paper, the logical next step is to test Weibull-based loss functions on large, real-world industrial data sets. Suitable applications may include large fleets of pumps or gas turbines.
