Satellite_Orbit_Prediction

A new approach for estimating the TLE parameters of a resident space object (RSO) has been developed: a prediction technique that allows us to estimate a satellite's TLE parameters more accurately into the future.

Primary Language: Python

Satellite Orbit Prediction Using a Machine Learning Approach

Research Paper:- https://ceur-ws.org/Vol-3282/icaiw_waai_3.pdf

Our knowledge of space weather and air density is limited, trajectories are only rarely obtained from noisy ground-based radar systems, and satellite operators are reluctant to disclose their maneuvering intentions. Orbit predictions based only on physics-based models may not achieve the accuracy required for collision avoidance, and have already resulted in satellite collisions due to the lack of information about the state of the space environment and the body characteristics of resident space objects (RSOs). Two-line element sets (TLEs) made accessible to the public lack any associated error or accuracy information, and the majority of TLE prediction techniques used today fit polynomials, which cannot capture periodic behaviour. This paper presents a methodology for orbit prediction using curve fitting and LSTM on historical orbital data. The proposed machine learning approaches are applied to various TLE parameters, with the LSTM model trained on large amounts of historical TLE data. The fitted data are synthesized and then compared with SGP4 predictions. The two proposed methods focus on reducing prediction error. The results of the study demonstrate that the proposed machine learning approaches can improve orbit prediction accuracy, with good performance in most cases. We also discuss further optimization and the computing requirements for running all-on-all conjunction analysis on the whole TLE catalog, and visualize when and where conjunctions may occur, both currently and in the near future.

This project aims to be an experimental playground for using machine learning to limit error in orbit prediction. Its primary contributions are listed below:

  • Using LSTM and curve fitting to predict a satellite’s TLE (the orbit produced by the proposed ML approach is closer to the true orbit).
  • Obtaining a comparable accuracy to the SGP4 model.
  • Train and use machine learning models to learn the error in orbit prediction models.
  • A distinctive study in which new TLE parameters are predicted from the historical data available, with comparable accuracy while limiting prediction error.
  • The machine learning model’s projected TLEs can accurately determine the reference orbit.

Prerequisites

Python 3 (>=3.5) is required to run the code. We also recommend using virtualenv for isolated Python environments and pip for package management. Note that to create a Python 3 environment you need to run:

virtualenv .env --python=python3
source .env/bin/activate

The code also assumes that PyTorch is already installed.
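If the Python dependencies are not yet installed, a command along the following lines should work (the package list below is illustrative and may not match the repository's exact requirements):

pip install torch numpy pandas scipy scikit-learn sgp4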

Models and Datasets

Dataset:- A two-line element set (TLE) is a standardized format for describing a satellite’s orbit and trajectory; the International Space Station is used as an example below. There are 14 fields in a TLE; however, our method only needs 9 of them. The orbits of tracked RSOs around Earth are specified and updated in the US space catalog as TLEs. TLE data is readily available to satellite owners and operators at Space-Track.org, published by US Strategic Command (USSTRATCOM).
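As a minimal sketch of how the orbital fields of a TLE can be read programmatically, the snippet below uses the python-sgp4 package (an assumption on our part; the repository may parse TLEs differently). The ISS TLE shown is illustrative only; current elements should be pulled from Space-Track.org.

# Minimal sketch: reading orbital elements from a TLE with python-sgp4.
from sgp4.api import Satrec

line1 = "1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997"
line2 = "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"

sat = Satrec.twoline2rv(line1, line2)
print("epoch (Julian date):        ", sat.jdsatepoch)
print("inclination (rad):          ", sat.inclo)
print("RA of ascending node (rad): ", sat.nodeo)
print("eccentricity:               ", sat.ecco)
print("argument of perigee (rad):  ", sat.argpo)
print("mean anomaly (rad):         ", sat.mo)
print("mean motion (rad/min):      ", sat.no_kozai)
print("B* drag term:               ", sat.bstar)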

Models-LSTM:- Long short-term memory (LSTM) is a type of recurrent neural network (RNN). It is made up of three layers: an input layer, a hidden layer, and an output layer. By constructing weight coefficients between hidden layers, LSTM solves the problem of long-distance dependence that plain RNNs cannot manage. This means that when forecasting satellite telemetry parameter time series, LSTM can learn long-term connections between distant nodes in the series, improving the accuracy of time series prediction. Our intuition to use LSTM came from its effectiveness on time series. We used plain linear regression only in cases where the data looked linear. Looking closely at these near-linear parameters, one notices that they periodically snap back to a previous value, much like a modulo operation; a "delinearize" step therefore takes the linear prediction and applies a modulo to the result.
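As a minimal sketch of this delinearize idea (the helper names are hypothetical, not taken from the repository): an angular parameter such as the mean anomaly grows roughly linearly with time but wraps back to 0 at 360 degrees, so we fit the unwrapped trend and re-apply the modulo when predicting.

# Sketch of "delinearization" for angular TLE parameters (hypothetical helper names).
import numpy as np

def linear_fit_unwrapped(t, angle_deg):
    """Fit a straight line to the unwrapped angle history (degrees)."""
    unwrapped = np.degrees(np.unwrap(np.radians(angle_deg)))
    slope, intercept = np.polyfit(t, unwrapped, 1)
    return slope, intercept

def delinearize(t_future, slope, intercept):
    """Evaluate the linear trend, then wrap it back into [0, 360)."""
    return (slope * t_future + intercept) % 360.0

# Example: epochs in days and a mean-anomaly history that wraps past 360 degrees.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
mean_anomaly = np.array([350.0, 80.0, 170.0, 260.0, 350.0])
slope, intercept = linear_fit_unwrapped(t, mean_anomaly)
print(delinearize(5.0, slope, intercept))  # prediction one day ahead (80.0 deg)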

To adjust the weights and the loss efficiently at each epoch, we used the Adamax optimizer, a variant of Adam based on the infinity norm. As this problem is large in terms of data and parameters, Adam-style optimization, which estimates lower-order moments, is a natural primary choice. Training proceeds by comparing the output with the target, calculating the error, optimizing the weights, and repeating the process. We performed a thorough hyper-parameter search to find the settings that minimize the loss on the validation set, and chose the number of epochs after which accuracy stopped improving. When determining the optimized weights of the trained model, the loss function is the most important factor to consider; the MAE loss is used because it is more robust to outliers. This fully connected neural network learns the mapping between the input and the output by connecting all available neurons, and it has 201,052,641 trainable parameters.
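The snippet below is a minimal PyTorch sketch of this setup: an LSTM that maps a window of past TLE vectors to the next one, trained with the Adamax optimizer and the MAE (L1) loss. The layer sizes, window length, and placeholder tensors are illustrative assumptions, not the exact configuration of the trained model.

# Minimal LSTM + Adamax + MAE training-loop sketch (illustrative hyper-parameters).
import torch
import torch.nn as nn

class TLEForecaster(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)  # predict the next TLE vector

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # use the last time step

model = TLEForecaster(n_features=9)
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                     # MAE, more robust to outliers than MSE

# Placeholder data: (N, window, 9) sliding windows of past TLEs and the next TLE.
x_train = torch.randn(128, 10, 9)
y_train = torch.randn(128, 9)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()                       # back-propagate the MAE error
    optimizer.step()                      # Adamax weight update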

Models-Curve Fitting:-

Linear or non-linear curve fitting is essentially a global minimization of the weighted sum of squares. In our study, for parameters such as the right ascension of the node, the argument of perigee, the mean anomaly, and the eccentricity, the weighted sum of squares and the root mean squared error are used to assess the goodness of fit. TLE data is fed either to fit a smooth, well-balanced model function that describes the data adequately, or to train a special kind of recurrent neural network that is capable of learning long-term dependencies in the data.
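The snippet below sketches this kind of weighted least-squares fit with SciPy; the model function (a linear trend plus one sinusoidal term) and the synthetic data are illustrative assumptions, not the exact formulation used in the paper.

# Weighted least-squares curve fitting with an RMSE goodness-of-fit check.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c, omega, phi):
    """Linear trend plus a sinusoidal term to capture periodic behaviour."""
    return a + b * t + c * np.sin(omega * t + phi)

t = np.linspace(0.0, 30.0, 120)                        # epochs in days
y = 0.001 + 1e-5 * t + 5e-4 * np.sin(0.5 * t)          # synthetic eccentricity-like series
y = y + np.random.normal(scale=5e-5, size=t.size)      # observation noise
sigma = np.full_like(t, 5e-5)                          # per-point uncertainty (weights)

popt, pcov = curve_fit(model, t, y, p0=[0.001, 1e-5, 5e-4, 0.5, 0.0], sigma=sigma)
residuals = y - model(t, *popt)
rmse = np.sqrt(np.mean(residuals ** 2))
print("fitted parameters:", popt)
print("RMSE:", rmse)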

Citations