This repository contains the code used in my master's thesis on LSTM-based anomaly detection for time series data. The thesis report can be downloaded from here.
Abstract
We explore the use of Long short-term memory (LSTM) for anomaly detection in temporal data. Due to the challenges in obtaining labeled anomaly datasets, an unsupervised approach is employed. We train recurrent neural networks (RNNs) with LSTM units to learn the normal time series patterns and predict future values. The resulting prediction errors are modeled to give anomaly scores. We investigate different ways of maintaining LSTM state, and the effect of using a fixed number of time steps on LSTM prediction and detection performance. LSTMs are also compared to feed-forward neural networks with fixed size time windows over inputs. Our experiments, with three real-world datasets, show that while LSTM RNNs are suitable for general purpose time series modeling and anomaly detection, maintaining LSTM state is crucial for getting desired results. Moreover, LSTMs may not be required at all for simple time series.
Requirements:
- Keras 2.0.3
- TensorFlow 1.0.0
- scikit-learn 0.18.2
- GPyOpt 1.0.3 (only required for hyper-parameter tuning)
Configuration: First set the configuration settings in configuration/config.py. This file has several groups of settings:
- run_config: sets parameters for the program execution such as the data folder, log file, etc.
  - Xserver: denotes whether the machine has a display environment. Set it to false when running on remote machines with no display; otherwise the plotting commands will result in an error.
  - experiment_id: an id used to identify different runs, for example in the logs. The result plots are saved in the folder imgs/<experiment_id>.
- opt_config: sets parameters for optimization runs. Refer: 1, 2
- multi_step_lstm_config: contains parameters specific to the LSTM network.
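As an illustration, the settings might look like the following. This is a hypothetical sketch: the actual names, values, and structure in configuration/config.py may differ, and the specific values shown here are made up.

```python
# Hypothetical sketch of the settings in configuration/config.py;
# the real file may be structured differently.
run_config = {
    "data_folder": "resources/data/discords/ECG/",  # where train/test/validation files live
    "log_file": "experiment.log",
    "Xserver": False,               # set False on remote machines with no display
    "experiment_id": "ecg_run_1",   # result plots are saved to imgs/ecg_run_1
}

multi_step_lstm_config = {
    "look_back": 24,    # input time steps per sample
    "look_ahead": 6,    # predicted time steps per sample
    "lstm_units": 64,   # illustrative network size
}
```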
Data Pre-processing: The LSTM network needs the data formatted such that each input sample has look_back data points and each output sample has look_ahead time steps. Python notebooks are used to convert the data into the appropriate format and to create the train, test, and validation datasets. We provide notebooks for the three datasets used in the thesis, which can serve as examples for new datasets. The three notebooks, along with the processed dataset files, are:
- ECG: notebooks/discords_ECG.ipynb, resources/data/discords/ECG/
- power_consumption: notebooks/discords_power_consumption.ipynb, resources/data/discords/dutch_power/
- machine_temperature: notebooks/NAB_machine_temp.ipynb, resources/data/nab/nab_machine_temperature/
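The windowing described above can be sketched as follows. This is a minimal NumPy illustration (the notebooks implement the actual conversion; the function name `make_windows` is hypothetical):

```python
import numpy as np

def make_windows(series, look_back, look_ahead):
    """Slice a 1-D series into (input, output) pairs: each input has
    look_back consecutive points and each output has the next
    look_ahead points."""
    X, y = [], []
    for i in range(len(series) - look_back - look_ahead + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back:i + look_back + look_ahead])
    return np.array(X), np.array(y)

series = np.arange(10, dtype=float)
X, y = make_windows(series, look_back=4, look_ahead=2)
print(X.shape, y.shape)  # (5, 4) (5, 2)
```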
Prediction Model Execution: The main LSTM models are in the file models/lstm.py. For training the models and generating predictions, two main files are provided:
- lstm_predictor.py: uses the default LSTM implementation from Keras.
- stateful_lstm_predictor.py: uses the stateful LSTM implementation.

Once the configuration setting data_folder has been set correctly, the code will look for the train, test, and validation sets in those files.
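A stateful LSTM predictor can be sketched roughly as follows. This is an illustrative example using the tf.keras API (the thesis code targets standalone Keras 2.0.3, and the layer sizes here are arbitrary, not the thesis settings):

```python
import numpy as np
from tensorflow import keras

look_back, look_ahead, batch_size = 4, 2, 1

# With stateful=True, the LSTM cell state is carried across batches
# instead of being reset after each one, so the batch size must be
# fixed in advance via batch_shape.
model = keras.Sequential([
    keras.Input(batch_shape=(batch_size, look_back, 1)),
    keras.layers.LSTM(32, stateful=True),
    keras.layers.Dense(look_ahead),  # predict look_ahead future values
])
model.compile(optimizer="adam", loss="mse")

x = np.zeros((batch_size, look_back, 1), dtype="float32")
pred = model.predict(x, batch_size=batch_size)
print(pred.shape)  # (1, 2)

model.reset_states()  # explicitly clear the carried state, e.g. between epochs
```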
Anomaly Detection: Running the LSTM models generates the predictions for the train, test, and validation sets. For anomaly detection we need to calculate the prediction errors (residuals), model them using a Gaussian distribution, and then set thresholds. These steps are again done in the notebook files specified in step 2.
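The error-modeling step can be sketched as follows. This is a minimal univariate illustration (the notebooks perform the actual fitting; the function name and the threshold rule here are hypothetical):

```python
import numpy as np

def gaussian_anomaly_scores(errors_fit, errors_test):
    """Fit a Gaussian to prediction errors from held-out normal data,
    then score test errors by their negative log-likelihood: the less
    likely an error is under the normal model, the higher the score."""
    mu, sigma = errors_fit.mean(), errors_fit.std()
    return 0.5 * ((errors_test - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

rng = np.random.RandomState(0)
val_errors = rng.normal(0.0, 0.1, size=1000)   # errors on normal (validation) data
test_errors = np.array([0.05, 0.9])            # the second error is anomalously large
scores = gaussian_anomaly_scores(val_errors, test_errors)
threshold = gaussian_anomaly_scores(val_errors, val_errors).max()
print(scores > threshold)  # [False  True]
```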