Neurological Injury Motion Sensing (NIMS) Project

Decoding accelerometry for classification and prediction of critically ill patients with severe brain injury

A pilot study of the Neurological Injury Motion Sensing (NIMS) project

Bhattacharyay, S., Rattray, J., Wang, M. et al. Decoding accelerometry for classification and prediction of critically ill patients with severe brain injury. Sci Rep 11, 23654 (2021). https://doi.org/10.1038/s41598-021-02974-w

Contents

- Overview
- Abstract
- Code
- Citation

Overview

This repository contains the code underlying the article entitled Decoding accelerometry for classification and prediction of critically ill patients with severe brain injury from the Johns Hopkins University Laboratory of Computational Intensive Care Medicine. In this README, we present the abstract, which outlines the motivation for the work and its findings, followed by a brief description of the code with which we generate these findings.

The code in this repository is commented throughout, with a description of each step alongside the code that achieves it.

Abstract

The goal of this research is to explore quantitative motor features in critically ill patients with severe brain injury (SBI). We hypothesized that computational decoding of these features would yield important information on underlying neurological states and clinical outcomes. Using wearable microsensors placed on all extremities, we recorded 1,701 hours of continuous, high-frequency accelerometry data from a prospective cohort of patients (n = 69) admitted to the ICU with SBI. Models were trained using time-, frequency-, and wavelet-domain motion features and levels of responsiveness and outcome as labels. The two primary tasks were detection of levels of responsiveness assessed by motor sub-score of the Glasgow Coma Scale (GCSm), and prediction of functional outcome at hospital discharge measured with the Glasgow Outcome Scale–Extended (GOSE). Detection models achieved significant (AUC: 0.70 [95% CI: 0.53–0.85]) and consistent (observation windows: 12 min – 9 hours) discrimination of SBI patients capable of purposeful movement (GCSm > 4). Prediction models accurately discriminated SBI patients of upper moderate disability or better (GOSE > 5) with 2–6 hours of observation (AUC: 0.82 [95% CI: 0.75–0.90]). Results suggest that computational analysis of time series motor activity in patients with SBI yields clinically important insights on underlying neurologic states and short-term clinical outcomes.

Code

All of the code used in this work can be found in the ./scripts directory as MATLAB (.m) files, R (.R) files, or Jupyter notebooks (.ipynb). Moreover, generalised functions have been saved in the ./scripts/functions sub-directory and .py scripts used to record accelerometry data from the bedside are available in the ./scripts/accel_recording_scripts sub-directory.

In this .m script, we iterate through the compiled triaxial accelerometry data from each patient, filter each axis with a high-pass (f_c = 0.2 Hz) 4th-order Butterworth filter, and extract 7 different motion features from non-overlapping 5-second windows. Outputs are saved as .csv feature tables. In this script, we also plot short examples of the accelerometry processing pipeline for Figure 1.
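
As an illustration, the filtering and windowing steps can be sketched in Python with SciPy. The sampling rate and the two features shown here are assumptions for the example; the actual script is in MATLAB and extracts 7 motion features.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10        # assumed sampling rate (Hz); the study's sensors may differ
WINDOW_S = 5   # non-overlapping 5-second windows, as in the script

# 4th-order high-pass Butterworth filter at f_c = 0.2 Hz, as described above
b, a = butter(N=4, Wn=0.2, btype="highpass", fs=FS)

def extract_windowed_features(axis_signal):
    """Filter one accelerometry axis and return a per-window feature table.

    Returns an array of shape (n_windows, 2) holding two illustrative
    features (mean absolute value and standard deviation).
    """
    filtered = filtfilt(b, a, axis_signal)           # zero-phase filtering
    win = FS * WINDOW_S
    n_windows = len(filtered) // win
    windows = filtered[: n_windows * win].reshape(n_windows, win)
    return np.column_stack([np.abs(windows).mean(axis=1), windows.std(axis=1)])

# Synthetic 60-second signal: slow drift plus a faster motion component
t = np.arange(0, 60, 1 / FS)
signal = 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.1 * np.sin(2 * np.pi * 2 * t)
features = extract_windowed_features(signal)  # 12 windows x 2 features
```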

In this .R script, we apply our multiple missing-feature imputation algorithm. In the event of totally missing recordings (n = 10/483), we impute upper extremity recordings with linear regression from ipsilateral upper extremity sensors, impute lower extremity recordings with linear regression from contralateral upper extremity sensors, and impute bed sensor recordings by sampling with replacement from the total distribution of bed sensor values. We then impute the large majority of missing values with multiple, normalized time-series imputation using the Amelia II package. We create 9 imputations, each stored in a separate .csv file.
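
The bed-sensor fallback, sampling with replacement from the observed values, can be sketched as follows. The data here are synthetic; the real pipeline performs this step, along with the regression and Amelia II imputations, in R.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_by_resampling(values):
    """Fill NaNs by sampling with replacement from the observed values."""
    values = np.asarray(values, dtype=float)
    missing = np.isnan(values)
    observed = values[~missing]
    out = values.copy()
    out[missing] = rng.choice(observed, size=missing.sum(), replace=True)
    return out

# Toy bed-sensor feature series with two missing windows
bed_feature = np.array([0.1, np.nan, 0.3, 0.2, np.nan, 0.25])
imputed = impute_by_resampling(bed_feature)
```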

In this .R script, we correct for gross external movements by adjusting for the motion features calculated from the sensor placed at the foot of each patient's bed. Based on a literature-sourced SMA threshold for human dynamic activity, we define distributions for each feature corresponding to static activity and correct feature values from extremity sensors accordingly. As a result, we have 9 bed-corrected, imputed feature sets, each stored in a separate .csv file.
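
A minimal sketch of the SMA-based static/dynamic decision is below. The threshold value here is hypothetical; the script takes its threshold from the literature and runs in R.

```python
import numpy as np

SMA_THRESHOLD = 0.135  # hypothetical cutoff for this example; see the paper

def signal_magnitude_area(ax, ay, az):
    """Mean summed absolute acceleration over one window (a single SMA value)."""
    return np.mean(np.abs(ax) + np.abs(ay) + np.abs(az))

def is_static(ax, ay, az):
    """Flag a window as static activity (SMA below the dynamic threshold)."""
    return signal_magnitude_area(ax, ay, az) < SMA_THRESHOLD

# Two toy windows: near-still sensor vs. clearly moving sensor
quiet = (np.full(50, 0.01), np.full(50, 0.01), np.full(50, 0.01))
active = (np.full(50, 0.2), np.full(50, 0.2), np.full(50, 0.2))
```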

In this .R file, we create repeated cross-validation partitions (5 repeats of 5-fold CV) for each tested observation window based on the available GCS observations in that window. We principally use the createMultiFolds function from the caret package to stratify folds by outcome labels. Folds for each observation window are stored in a .csv file in a newly created directory.
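
An analogous stratified, repeated fold assignment can be sketched in Python with scikit-learn's RepeatedStratifiedKFold; the actual script uses caret's createMultiFolds in R, and the labels below are toy data.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

# Toy binary labels standing in for threshold-level GCSm observations
y = np.array([0] * 30 + [1] * 20)
X = np.zeros((len(y), 3))  # placeholder feature matrix

# 5 repeats of 5-fold CV, stratified by the outcome label
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=42)
folds = list(rskf.split(X, y))  # 25 (train, validation) index pairs
```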

In this .R file, we train Linear Optimal Low Rank Projections (LOL) on model training sets and reduce both training and validation sets to low-dimensional spaces prior to model training. Prior to LOL, we normalize feature spaces per the distribution of feature type and sensor combinations. This enables us to use LOL coefficients to compare feature type and sensor significance.
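
The pre-LOL normalization idea can be sketched as follows: features are z-scored jointly within each group so that coefficients learned downstream are comparable across groups. The column names and grouping rule here are hypothetical, and LOL itself (available via the lolR package) is not reproduced.

```python
import numpy as np
import pandas as pd

# Hypothetical wide feature table: one column per (sensor, feature-type) pair
df = pd.DataFrame({
    "RW_freq_entropy": [0.2, 0.4, 0.6],
    "RW_sma": [1.0, 2.0, 3.0],
    "LE_sma": [10.0, 20.0, 30.0],
})

def normalise_by_group(frame):
    """Z-score columns jointly within each feature-type group (column suffix)."""
    groups = frame.columns.str.split("_", n=1).str[1]
    out = frame.copy()
    for g in pd.unique(groups):
        cols = frame.columns[groups == g]
        vals = frame[cols].to_numpy()
        out[cols] = (vals - vals.mean()) / vals.std()
    return out

normed = normalise_by_group(df)
```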

In this .R file, we train and validate logistic regression models (GLM) for threshold-level GCSm detection, threshold-level GOSE at discharge prediction, and threshold-level GOSE at 12 months prediction. We train and evaluate models of varying observation windows and target dimensionalities. In this script, we also calculate our feature significance scores. This score is equivalent to the absolute LOL coefficient weighted by the trained linear coefficients of the corresponding logistic regression model.
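
The significance score itself reduces to a weighted absolute-value product; a sketch with made-up loadings and coefficients:

```python
import numpy as np

# Hypothetical shapes: d = 3 original features projected to k = 2 dimensions
lol_loadings = np.array([[0.8, 0.1],
                         [0.2, 0.9],
                         [0.5, 0.5]])   # LOL projection coefficients (d x k)
glm_coefs = np.array([1.5, -0.5])       # trained GLM weight per LOL component

# Per-feature significance: |LOL loading| weighted by |GLM coefficient|
significance = np.abs(lol_loadings) @ np.abs(glm_coefs)
```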

In this .ipynb notebook, we calculate AUCs, ROC curves, and classification metrics for each observation window based on the validation set predictions returned by our models. We use bias-corrected bootstrapping for repeated cross-validation (Repeated BBC-CV) to calculate 95% confidence intervals for the metrics and the ROC curve. This script is programmed to perform bootstrapping in parallel on 10 cores.
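
A plain percentile bootstrap of the AUC can be sketched with synthetic predictions; the Repeated BBC-CV procedure used here is more involved, since it additionally corrects for bias arising from the repeated cross-validation structure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2021)

# Toy validation-set predictions standing in for a model's output
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=200), 0, 1)

# Percentile bootstrap over observations for a 95% CI on the AUC
aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:
        continue  # a resample needs both classes for the AUC to be defined
    aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
```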

In this .R file, we calculate probability calibration curves and associated calibration metrics for each observation window based on the validation set predictions returned by our models. We use bias-corrected bootstrapping for repeated cross-validation (Repeated BBC-CV) to calculate 95% confidence intervals for the metrics and the calibration curve. This script is programmed to perform bootstrapping in parallel on 10 cores.
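
A basic calibration curve on synthetic predictions can be computed with scikit-learn's calibration_curve; the actual script additionally bootstraps confidence intervals in R.

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(7)

# Toy labels and probabilities loosely tied together so the curve is non-trivial
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.5 * y_true + rng.uniform(0, 0.5, size=500), 0, 1)

# Fraction of positives vs. mean predicted probability per bin
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
```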

In this .R file, we retrospectively examine predictions of Pr(GCSm > 4) in patients (n = 6) who experienced neurological transitions across this threshold to visually determine potential clinical utility of the accelerometry-based system. For each of the 6 patients, we train optimally discriminating detection models (one with a shorter observation window of 27 minutes and one with a longer observation window of 6 hours) on the remaining patient set and validate predictions on the case study patients specifically over a large, continuously overlapping observation window set. We bootstrap across imputations to produce 95% confidence intervals that account for variation due to imputation on the predictions. Then, we prepare the probability trajectories for plotting.

In this .R file, we construct manuscript tables and perform miscellaneous statistical analyses for different parts (including figures) of the manuscript and supplementary materials. In addition to the classification metrics calculated in script no. 7, we also calculate classification accuracy with repeated BBC-CV in this script.

In this .R file, we produce the figures for the manuscript and the supplementary figures. The large majority of the quantitative figures in the manuscript are produced using the ggplot2 package.

Citation

Bhattacharyay, S., Rattray, J., Wang, M. et al. Decoding accelerometry for classification and prediction of critically ill patients with severe brain injury. Sci Rep 11, 23654 (2021). https://doi.org/10.1038/s41598-021-02974-w