NUHARIPS

Real-time Human Activity Recognition and Indoor Positioning System for the Elderly

Primary Language: Python

The global elderly population has grown dramatically: the number of individuals aged 60 and above has reached 962 million, more than double the 382 million recorded in 1980, and is projected to double again by 2050, surpassing 2.1 billion. As people age, their physical activity and capacity to perform daily tasks decline, affecting both their mental and physical health. Moreover, a large proportion of elderly people living in residential care settings have dementia or other cognitive impairments. Because care facilities have few staff members relative to the number of residents, technology can enhance the care provided. In particular, the ability to track elderly patients with cognitive impairment or dementia can prevent wandering and getting lost. Previous research has focused on applying machine learning and deep learning models to recognize the activities of healthy, younger populations, while the activities performed by the elderly have received little attention.

This study aims to assist elderly people by integrating Human Activity Recognition (HAR) and indoor positioning, monitoring patients' activities in different indoor and outdoor environments in real time while simultaneously locating their positions in indoor environments. We propose a solution that incorporates artificial intelligence, particularly deep learning models, and is based on sensor readings collected from a smartwatch. The system detects five activity classes: walking, sitting, lying down, going upstairs, and going downstairs. Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and a hybrid CNN-LSTM model are implemented and their performances comprehensively compared. Upon evaluation, the CNN-LSTM model outperformed all other HAR models, achieving an F1-score of 98.95%.

The Indoor Positioning System (IPS) is based on RSSI measurements of a BLE beacon and is implemented using machine learning classifiers, including k-Nearest Neighbors (kNN), Support Vector Machines (SVMs) with linear, polynomial, and RBF kernels, NuSVC, Random Forest, Decision Tree, Gradient Boosting, Gaussian Naïve Bayes, Linear Discriminant Analysis, and Quadratic Discriminant Analysis. A Voting Classifier was also implemented using majority ('hard') voting over the five best-performing classifiers. Cross-validation with a shuffle-split method (n = 10 splits) was used to divide the dataset into training and testing subsets with an 80:20 ratio. The Random Forest classifier achieved the highest mean F1-score of 84.12%, while the Voting Classifier achieved the second highest mean F1-score at 83.88%.
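
To make the HAR side more concrete, below is a minimal sketch of a CNN-LSTM classifier of the kind described above. The window length, number of sensor channels, filter sizes, and layer widths are illustrative assumptions, not the exact configuration used in this project; only the five activity classes come from the description above.

```python
# Minimal CNN-LSTM sketch for smartwatch-based HAR (illustrative assumptions:
# windows of 128 time steps with 6 channels, e.g. 3-axis accelerometer +
# 3-axis gyroscope; layer sizes are placeholders, not the project's settings).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128   # assumed time steps per sliding window
N_CHANNELS = 6     # assumed sensor channels
N_CLASSES = 5      # walking, sitting, lying down, going upstairs, going downstairs

def build_cnn_lstm():
    model = models.Sequential([
        layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
        # Convolutional front end extracts local motion patterns within each window.
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Dropout(0.3),
        # LSTM layer models temporal dependencies across the extracted features.
        layers.LSTM(64),
        layers.Dense(64, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-in data; in practice X holds windowed smartwatch readings
    # and y holds integer activity labels in [0, 5).
    X = np.random.randn(256, WINDOW_LEN, N_CHANNELS).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=256)
    build_cnn_lstm().fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```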
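
The IPS evaluation pipeline can be sketched in scikit-learn as shown below, assuming a feature matrix X of BLE RSSI readings and integer zone labels y (synthetic stand-ins here). The five estimators placed in the hard-voting ensemble are chosen for illustration only, since the project combines its five best-performing classifiers, and the weighted F1 scoring is likewise an assumption.

```python
# Sketch of shuffle-split evaluation for the RSSI-based IPS (assumed data
# shapes and classifier selection; not the project's exact configuration).
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Synthetic stand-in data: 500 samples, 4 RSSI features, 4 location zones.
rng = np.random.default_rng(42)
X = rng.normal(loc=-70, scale=10, size=(500, 4))
y = rng.integers(0, 4, size=500)

# Majority ('hard') voting over five classifiers (illustrative choice).
voting = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("svm_rbf", SVC(kernel="rbf")),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("lda", LinearDiscriminantAnalysis()),
    ],
    voting="hard",
)

# Shuffle-split cross-validation: 10 random 80:20 train/test splits,
# scored here with the weighted F1-score.
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_val_score(voting, X, y, cv=cv, scoring="f1_weighted")
print(f"Mean F1-score over 10 splits: {scores.mean():.4f}")
```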