human-activity-recognition-smartphone-sensors

Mini-project based on the activity detection dataset at the UCI ML Repository. Check out the branches for the latest push.


1. Introduction:

A modern phone comes with a variety of sensors that record data every second of its active life. This database was built by a Human Activity Recognition research project from the recordings of 30 subjects performing activities of daily living (ADL) while carrying a waist-mounted smartphone with embedded inertial sensors. The complete data & related papers can be accessed at the UCI ML Repository.

2. Attribute Information (as it is):

For each record in the dataset, the following is provided:

  • Triaxial acceleration from the accelerometer (total acceleration) and the estimated body acceleration.
  • Triaxial Angular velocity from the gyroscope.
  • A 561-feature vector with time and frequency domain variables.
  • Its activity label.
  • An identifier of the subject who carried out the experiment.

3. Goal:

  • Algorithmic feature selection
  • Build a classifier for identifying user activity from the given data
  • Report our results for a non-academic audience

4. Our Approach:

  • Exploratory analysis: we started by analyzing individual variables, but with 561 features the number of variables and variable combinations to inspect individually was quickly overwhelming.

  • So we ran a basic variance check on each variable: if a variable's variance is very low, it is likely to have little impact on the output class. This is not a hard rule, and we intend to use it in conjunction with other results where needed.

  • Variable Importance

  • We used the variable importance scores from the random forest model-building phase as an indicator of whether a particular variable should be kept in our model or removed.

Variable importance - top 5

  • More about feature importance in random forests using scikit-learn here
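The two filters above, the low-variance screen and the random-forest importance ranking, can be sketched with scikit-learn. This is an illustrative sketch only: the synthetic data, threshold, and forest settings below are assumptions, not the project's actual code.

```python
# Sketch of the two filtering steps on synthetic data standing in for the
# 561-column feature matrix. All names and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
X[:, 0] = 0.001 * rng.normal(size=300)      # a near-constant (low-variance) column
y = (X[:, 1] + X[:, 2] > 0).astype(int)     # labels driven by two of the columns

# Step 1: drop near-zero-variance columns (a soft filter, not a hard rule)
vt = VarianceThreshold(threshold=0.01)
X_kept = vt.fit_transform(X)

# Step 2: rank the remaining columns by random-forest importance
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_kept, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("columns kept after variance check:", X_kept.shape[1])
print("top columns by importance:", ranking[:5])
```

On this toy data the near-constant column is dropped by the variance check, and the two columns that actually drive the labels surface at the top of the importance ranking.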

5. Code description

  • Exploratory analysis notebook:

    • Correlation matrix
    • Variance check
    • Distribution plots
    • Detailed statistics for each variable
  • Classification model notebook:

    • We start with our raw dataset (samsungData.Rda)
    • Split this dataset into train & test sets (70/30)
    • Build our base model using all 561 variables
    • Select variables using variable importance scores
    • Repeat this process until we arrive at the final model
    • During each iteration, we use the OOB (out-of-bag) score to gauge the generalizability of our model
Correlation Matrix
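The exploratory checks listed above (correlation matrix, variance check, per-variable statistics) can be sketched in a few lines of pandas. The columns below are synthetic stand-ins loosely named after the dataset's features, not the real data.

```python
# Minimal sketch of the exploratory-notebook checks on synthetic columns.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"tBodyAcc_mean_X": rng.normal(size=200)})
# make one column strongly correlated with the first
df["tBodyAcc_mean_Y"] = 0.9 * df["tBodyAcc_mean_X"] + rng.normal(scale=0.1, size=200)
df["tGravityAcc_mean_X"] = rng.normal(size=200)

corr = df.corr()          # pairwise Pearson correlation matrix
variances = df.var()      # feeds the low-variance screen from section 4
stats = df.describe()     # detailed statistics for each variable
strong = corr.abs() > 0.8 # flag strongly correlated pairs for possible pruning
print(corr.round(2))
```

Strongly correlated pairs flagged this way are candidates for dropping one of the two, since they carry largely redundant information.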

  • PCA notebook

  • An attempt at looking at all 561 variables using principal components
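As a sketch of that idea: fit PCA and count how many components are needed to capture most of the variance. A small synthetic matrix with a known low-rank structure stands in for the 561 real features here; the 99% cutoff is an illustrative choice, not the notebook's.

```python
# Sketch: summarize a wide feature matrix with a few principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 3))                      # 3 underlying factors
X = latent @ rng.normal(size=(3, 20)) + 0.01 * rng.normal(size=(300, 20))

pca = PCA(n_components=10).fit(X)
explained = np.cumsum(pca.explained_variance_ratio_)
# smallest number of components reaching 99% of the variance
k = int(np.searchsorted(explained, 0.99) + 1)
print("components for 99% of variance:", k)
```

Because the toy data was built from three latent factors, three components recover essentially all of its variance; on the real 561-feature matrix the curve is what tells you how compressible the data is.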

Confusion matrix for the final model
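The iterative selection loop from the classification notebook can be sketched as below. The data, the number of iterations, and the keep-the-top-half rule are illustrative assumptions, not the notebook's exact settings; what carries over is the shape of the loop: fit, read importances, prune, and watch the OOB score.

```python
# Sketch of the notebook's loop: 70/30 split, random forest with OOB scoring,
# and repeated pruning by variable importance. Synthetic data stands in for
# the real 561-feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))
y = (X[:, 0] - X[:, 1] + X[:, 2] > 0).astype(int)

# 70/30 train/test split, as in the notebook
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

cols = np.arange(X.shape[1])            # start from all columns (561 in the real data)
for _ in range(3):                      # a few pruning iterations
    rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
    rf.fit(X_tr[:, cols], y_tr)
    print("OOB score with", len(cols), "features:", round(rf.oob_score_, 3))
    order = np.argsort(rf.feature_importances_)[::-1]
    cols = cols[order[: max(3, len(cols) // 2)]]   # keep the top half

final = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
final.fit(X_tr[:, cols], y_tr)
test_acc = final.score(X_te[:, cols], y_te)
print("final feature count:", len(cols), "| held-out accuracy:", round(test_acc, 3))
```

Checking the OOB score at each iteration, as the notebook does, guards against pruning so aggressively that generalization starts to drop.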

Read more about Random Forests, Bootstrap Aggregating & Variable importance scores here.