Bayesrul

Bayesian Neural Networks to predict RUL on N-CMAPSS

This is a library for the benchmarking of uncertainty quantification (UQ) methods for deep learning (DL) in the context of Remaining Useful Life (RUL) prognostics. We experiment with heteroscedastic neural networks (HNN), deep ensembles (DE), Monte Carlo dropout (MCD) and several Bayesian neural network (BNN) techniques for variance reduction, such as the local reparametrization trick (LRT), Flipout (FO) and radial Bayesian networks (RAD).
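To give an idea of the simplest of these methods, here is a minimal, hypothetical sketch (not the library's actual model code) of a heteroscedastic network: it predicts both a RUL mean and a variance, and is trained by minimizing the Gaussian negative log-likelihood.

import torch
import torch.nn as nn

class HNN(nn.Module):
    """Toy heteroscedastic head: predicts a RUL mean and a log-variance."""

    def __init__(self, in_features: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_features, 64), nn.ReLU())
        self.mu = nn.Linear(64, 1)       # predicted RUL mean
        self.log_var = nn.Linear(64, 1)  # log-variance keeps the variance positive

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.log_var(h)

def gaussian_nll(y, mu, log_var):
    # NLL of y under N(mu, exp(log_var)), dropping the constant log(2*pi)/2 term
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()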

The dataset we use for the benchmark (N-CMAPSS) needs to be downloaded from the NASA prognostics website. Three deep neural networks are implemented with PyTorch and PyTorch Lightning, and turned into BNNs with Pyro and TyXe. Hyperparameter search is implemented with Optuna. The library computes the negative log-likelihood (NLL), root mean squared error (RMSE), root mean squared calibration error (RMSCE) and sharpness to evaluate aspects such as model accuracy and the quality of the predictive uncertainty estimates. Plots and CSV files are generated for analysis.
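As a rough illustration of these metrics, here is one common formulation (not necessarily the library's exact implementation), assuming Gaussian predictive distributions and 1-D numpy arrays of targets y, means mu and standard deviations sigma:

import numpy as np
from scipy import stats

def rmse(y, mu):
    return np.sqrt(np.mean((y - mu) ** 2))

def nll(y, mu, sigma):
    return -np.mean(stats.norm.logpdf(y, mu, sigma))

def sharpness(sigma):
    # RMS of the predictive standard deviations: lower means sharper predictions
    return np.sqrt(np.mean(sigma ** 2))

def rmsce(y, mu, sigma, n_bins=100):
    # RMS gap between expected and observed coverage of central prediction intervals
    p = np.linspace(0.01, 0.99, n_bins)
    lo = stats.norm.ppf((1 - p) / 2, mu[:, None], sigma[:, None])
    hi = stats.norm.ppf((1 + p) / 2, mu[:, None], sigma[:, None])
    obs = np.mean((y[:, None] >= lo) & (y[:, None] <= hi), axis=0)
    return np.sqrt(np.mean((obs - p) ** 2))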

The library uses poetry for dependency management and hydra for configuration management. You need to install poetry first to set up the project.

Setup

Install poetry (Linux, macOS, Windows (WSL))

curl -sSL https://install.python-poetry.org | python3 -

Clone the repository

git clone git@github.com:lbasora/bayesrul.git
cd bayesrul

Use poetry to install dependencies

poetry install

Rename the file .env.example to .env and set MY_VAR to the path of the bayesrul project folder.
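For example, if the repository was cloned to /home/user/repos/bayesrul (path hypothetical), .env would contain:

MY_VAR=/home/user/repos/bayesrul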

Using the library

The library relies on hydra for configuration, and the conf files are in bayesrul/conf/.

This library is based on the lightning-hydra-template template. Please refer to its documentation for further details on its philosophy and use.

The scripts subfolder contains examples of how to run the different functionalities. In the following, we document a few use cases.

Generate dataset lmdb files for N-CMAPSS

Make sure the N-CMAPSS files downloaded from NASA are available at data/ncmapss/

poetry run python -m bayesrul.tasks.build_ds

You can override the options in the conf file bayesrul/conf/build_ds.yaml
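Options can also be overridden directly on the command line with hydra's key=value syntax. For instance, assuming build_ds.yaml exposes an option named test_size (a hypothetical name; check the actual file):

poetry run python -m bayesrul.tasks.build_ds test_size=0.2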

Hyperparameter search

Example with an HNN:

poetry run python -m bayesrul.tasks.hpsearch hpsearch=ncmapss_hnn task_name=hps_hnn

Model training and test

poetry run python -m bayesrul.tasks.train experiment=ncmapss_hnn task_name=train_hnn

For instance, to execute 5 runs of the HNN model with different seeds using hydra's multirun facility:

poetry run python -m bayesrul.tasks.train experiment=ncmapss_hnn seed=1,2,3,4,5 task_name=train_hnn --multirun

For the models you wish to keep, create a results subfolder inside the bayesrul project folder and copy into it the model checkpoints saved in the hydra output folder (e.g. /home/luis/repos/bayesrul/results/ncmapss/runs/HNN/0/checkpoints/).
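For example, to keep the first HNN run (the hydra output directory below is a placeholder to adapt):

mkdir -p results/ncmapss/runs/HNN/0/checkpoints
cp <hydra_output_dir>/checkpoints/*.ckpt results/ncmapss/runs/HNN/0/checkpoints/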

Model predictions

To generate the predictions with a model checkpoint:

poetry run python -m bayesrul.tasks.predict task_name=predict_hnn ckpt_path=/home/luis/repos/bayesrul/results/ncmapss/runs/HNN/0/checkpoints/epoch_326-step_311631.ckpt

To generate the predictions for all checkpoints in the results folder:

poetry run python -m bayesrul.tasks.predict task_name=predict ckpt_path=all

Performance analysis

To generate metrics:

poetry run python -m bayesrul.tasks.metrics

Metrics can be generated for the train, test or val datasets by setting the subset parameter in the conf/metrics.yaml file.
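For instance, to compute the metrics on the test set, set in conf/metrics.yaml:

subset: test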

Inside the notebooks folder, the plot_results.py file is a Visual Studio Code notebook with examples of how to analyse model performance from the predictions and metrics generated by bayesrul.tasks.metrics.
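As a minimal sketch of such an analysis (file and column names below are hypothetical; adapt them to the outputs actually produced by bayesrul.tasks.metrics):

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical file and column names: adapt to the generated outputs
df = pd.read_csv("predictions.csv")
mu, sigma, y = df["mu"], df["sigma"], df["y_true"]

plt.plot(y.values, label="true RUL")
plt.plot(mu.values, label="predicted RUL")
plt.fill_between(range(len(mu)), mu - 2 * sigma, mu + 2 * sigma, alpha=0.3, label="±2σ")
plt.xlabel("sample")
plt.ylabel("RUL")
plt.legend()
plt.show()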

References

The main reference to this code is:

@misc{https://doi.org/10.48550/arxiv.2302.04730,
  title = {A Benchmark on Uncertainty Quantification for Deep Learning Prognostics},
  author = {Basora, Luis and Viens, Arthur and Chao, Manuel Arias and Olive, Xavier},
  year = {2023},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2302.04730},
  url = {https://arxiv.org/abs/2302.04730},
}

Please read the paper for the full list of references used to implement the methods.

Paper benchmark results (model predictions) can be downloaded from figshare for further analysis. This data needs to be saved inside the bayesrul folder to be accessible to bayesrul.tasks.metrics and plot_results.py.