This repository includes the source code used in our paper:
Yehonatan Dahan, Guy Revach, Jindrich Dunik, and Nir Shlezinger. "Uncertainty Quantification in Deep Learning Based Kalman Filters." (2023).
Various algorithms combine deep neural networks (DNNs) and Kalman filters (KFs) to learn from data to track in complex dynamics. Unlike classic KFs, DNN-based systems do not naturally provide the error covariance alongside their estimate, which is of great importance in some applications, e.g., navigation. To bridge this gap, in this work we study error covariance extraction in DNN-aided KFs. We examine three main approaches that are distinguished by the ability to associate internal features with meaningful KF quantities, such as the Kalman gain (KG) and prior covariance. We identify the differences between these approaches in their requirements and their effect on the training of the system. Our numerical study demonstrates that the above approaches allow DNN-aided KFs to extract error covariance, with the most accurate error prediction provided by model-based/data-driven designs.
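For reference, the following are the standard linear-KF relations that tie the Kalman gain to the prior and posterior error covariances (textbook identities, included only to illustrate why such extraction is possible; they are not a derivation from the paper):

$$
\mathbf{S}_t = \mathbf{H}\,\boldsymbol{\Sigma}_{t\mid t-1}\mathbf{H}^{\top} + \mathbf{R}, \qquad
\mathbf{K}_t = \boldsymbol{\Sigma}_{t\mid t-1}\mathbf{H}^{\top}\mathbf{S}_t^{-1}, \qquad
\boldsymbol{\Sigma}_{t\mid t} = \left(\mathbf{I} - \mathbf{K}_t\mathbf{H}\right)\boldsymbol{\Sigma}_{t\mid t-1},
$$

so a DNN that computes (or internally represents) the gain $\mathbf{K}_t$ carries information about the error covariance whenever the observation model $(\mathbf{H}, \mathbf{R})$ is known.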
This repository consists of the following Python scripts:
- Main.py: the interface for running both training and testing for the different state-space models presented in our paper.
- config.ini: the configuration file for running the Main.py script. Further details about the parameters can be found in the config.md file.
- GSSFiltering/dnn.py: defines the deep neural network (DNN) architectures: KalmanNet and Split-KalmanNet.
- GSSFiltering/filtering.py: implements the filtering algorithms for the DNN-based filters and the extended Kalman filter (a toy covariance-extraction sketch follows this list).
- GSSFiltering/model.py: defines the state-space model's parameters.
- GSSFiltering/tester.py: handles the testing procedure.
- GSSFiltering/trainer.py: handles the training procedure.
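As a toy illustration of the idea behind covariance extraction (complementing, but not reproducing, GSSFiltering/filtering.py), the sketch below recovers a posterior error covariance from a Kalman gain via the textbook identity $\boldsymbol{\Sigma}_{t\mid t}\mathbf{H}^{\top} = \mathbf{K}_t\mathbf{R}$. It assumes a square, invertible observation matrix and a known measurement noise covariance; the function name is hypothetical and the repository's actual implementation may differ.

```python
# Minimal NumPy sketch: recover a posterior error covariance from a (possibly learned)
# Kalman gain using the standard linear-KF identity Sigma_post @ H.T = K @ R.
# This is an illustration only, not the code used in GSSFiltering/filtering.py.
import numpy as np

def covariance_from_gain(K: np.ndarray, H: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Posterior error covariance implied by gain K, assuming H invertible and R known."""
    sigma_post = K @ R @ np.linalg.inv(H).T   # Sigma_post = K R H^{-T}
    # A learned gain need not yield an exactly symmetric matrix, so symmetrize.
    return 0.5 * (sigma_post + sigma_post.T)

# Sanity check against a textbook KF update.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
sigma_prior = A @ A.T + np.eye(3)            # a valid prior covariance
H = np.eye(3)                                # observation matrix
R = 0.1 * np.eye(3)                          # measurement noise covariance
S = H @ sigma_prior @ H.T + R                # innovation covariance
K = sigma_prior @ H.T @ np.linalg.inv(S)     # Kalman gain
sigma_post_ref = (np.eye(3) - K @ H) @ sigma_prior
assert np.allclose(covariance_from_gain(K, H, R), sigma_post_ref)
```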
All required packages are listed in the requirement.txt file.
To run the code, simply define the desired configuration in config.ini and execute Main.py.
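A minimal sketch of a typical run is shown below; it uses only the standard library, and the configuration section/option names in the comments are hypothetical placeholders (the real parameters are documented in config.md).

```python
# Sketch of a run: inspect (and optionally edit) config.ini, then launch Main.py.
import configparser
import subprocess

cfg = configparser.ConfigParser()
cfg.read("config.ini")
print("Loaded sections:", cfg.sections())    # inspect the configuration before running

# Optionally edit an option and save it back (placeholder section/key names):
# cfg["Training"]["num_epochs"] = "50"
# with open("config.ini", "w") as f:
#     cfg.write(f)

subprocess.run(["python", "Main.py"], check=True)  # run training/testing as configured
```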
- The Recurrent Kalman Network (RKN) comparison in the paper was done with respect to the original RKN implementation.
- This code is based on the Split-KalmanNet code.