
A complete sitting posture recognition application using an OV5647 camera. Project carried out for the Internet of Things Course in July 2023.


Posture Recognition

The system is a tool that takes as input an image of a person sitting, captured from a lateral view by any camera (an OV5647 camera in this project). Its output is an image of the detected keypoints, determined with the OpenPose 18-keypoint architecture, together with a classification of the posture as correct, incorrect, or indeterminate. When the posture is classified as incorrect, the system uses different colors in the output image to highlight which body parts led the algorithm to that classification.

System architecture

In general, the system architecture is as follows:

  • Data collection
    • Data is collected by a camera connected to a microcontroller
  • Feature extraction
    • Image preprocessing
    • Detection of keypoints
    • Calculation of angles and extraction of other information
  • Classification (a code sketch of this stage follows the list)
    • XGBoost model
    • Bayesian optimization
    • SHAP values
  • User interface
    • Presentation of classification results
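As an illustration of how the feature-extraction and classification steps could fit together, the sketch below computes a joint angle from three keypoints, tunes an XGBoost classifier with Bayesian optimization (via scikit-optimize's BayesSearchCV), and computes SHAP values for the resulting predictions. The angle features, placeholder data, binary labels, and hyperparameter ranges are illustrative assumptions, not the project's actual configuration (the real system distinguishes correct, incorrect, and indeterminate postures).

```python
# Illustrative sketch of the feature + classification pipeline (not the project's code).
import numpy as np
import xgboost as xgb
import shap
from skopt import BayesSearchCV

def angle(a, b, c):
    """Angle in degrees at keypoint b, formed by keypoints a and c, each given as (x, y)."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical feature matrix: one row per image, columns are angles such as
# ear-shoulder-hip or shoulder-hip-knee, plus other extracted measures.
X = np.random.rand(200, 4) * 180          # placeholder data
y = np.random.randint(0, 2, size=200)     # placeholder labels (0 = correct, 1 = incorrect)

# Bayesian optimization of XGBoost hyperparameters; ranges are arbitrary examples.
search = BayesSearchCV(
    xgb.XGBClassifier(eval_metric="logloss"),
    {
        "max_depth": (2, 8),
        "learning_rate": (0.01, 0.3, "log-uniform"),
        "n_estimators": (50, 300),
    },
    n_iter=20,
    cv=3,
)
search.fit(X, y)

# SHAP values attribute each prediction to the input angles; in the application,
# these attributions map back to the body parts highlighted in the output image.
explainer = shap.TreeExplainer(search.best_estimator_)
shap_values = explainer.shap_values(X)
print(search.best_params_)
```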

OpenPose keypoint detection architecture

  • The .h5 file containing the architecture weights should be stored in "PostureServer/posture_classification/" (see the path-check sketch below) and can be obtained from the following link
  • Weights
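A minimal sketch of checking that the weights file is in place before loading the model; the file name "openpose_weights.h5" is an assumption, since the actual file name and loading code live in the posture_classification module.

```python
# Minimal sketch: verify the OpenPose weights file is where the server expects it.
# The file name "openpose_weights.h5" is an assumption; use the name of the file
# downloaded from the link above.
from pathlib import Path

WEIGHTS_PATH = Path("PostureServer/posture_classification") / "openpose_weights.h5"

if not WEIGHTS_PATH.exists():
    raise FileNotFoundError(
        f"OpenPose weights not found; download them and place the file at {WEIGHTS_PATH}"
    )

# Keras-based OpenPose ports typically rebuild the network architecture and then
# call model.load_weights(str(WEIGHTS_PATH)); that builder function is project-specific.
```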

Screenshots of the system running (Portuguese interface)

Screenshots: correct, incorrect-trunk, incorrect-up, indeterminate.

Contributions

The project was developed by Vandemberg Monteiro (feature extraction and classification), Davi Queiroz (communication between the camera and the server and between the server and the client), and Yago Oliveira (user interface). Data collection was structured by Vandemberg Monteiro and carried out by Davi Queiroz and Yago Oliveira.