As a user, I want to use face gestures or expressions to trigger steps
Notes
- The purpose of this story is to lay down the foundation for face, hand, and pose tracking so that future stories can build on it easily (see the sketch after these notes)
- The goal is to advance to the next step by performing a preset facial expression
- Future stories will deal with customization
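A minimal sketch of what that shared tracking foundation could look like, assuming a TypeScript/web codebase. All names here (Tracker, TrackerKind, Landmark, TrackingResult) are hypothetical placeholders, not existing project code.

```typescript
// Hypothetical shared tracking foundation: one interface that face, hand,
// and pose trackers can all implement, so future stories can add new
// tracker types without changing the step-trigger logic.

export type TrackerKind = "face" | "hand" | "pose";

// A normalized landmark point, as most tracking libraries report them.
export interface Landmark {
  x: number;
  y: number;
  z?: number;
}

// One frame of tracking output, regardless of which tracker produced it.
export interface TrackingResult {
  kind: TrackerKind;
  landmarks: Landmark[];
  timestamp: number;
}

// Common contract every tracker (face, hand, pose) would implement.
export interface Tracker {
  readonly kind: TrackerKind;
  start(video: HTMLVideoElement): Promise<void>;
  stop(): void;
  onResult(listener: (result: TrackingResult) => void): void;
}
```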
Criteria
- When I click on a "Step" (1) and then "Start Step" (2), I should see "Expression found" under "Library" (3) (see Fig 1)
- When I click "Expression found" a sidepanel should show with a list of preset expressions
- When I select an expression, it should be saved
- When I start a lesson the webcam should also start if the current step requires an Expression Trigger
- When the webcam is started, I should know that it is started (a webcam feed, big "webcam on" text, etc)
- Finally, when the expression is matched the step should be triggered
Fig 1 (screenshot of the step editor showing the "Step" (1), "Start Step" (2), and "Library" (3) controls)
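A rough sketch of the Expression Trigger flow described in the criteria, assuming the hypothetical Tracker, TrackingResult, and Landmark types from the sketch under Notes. classifyExpression() is a placeholder for whatever model or heuristic maps face landmarks to a preset expression; none of these names come from the actual codebase.

```typescript
// Hypothetical wiring between a lesson step and an Expression Trigger.

type PresetExpression = "smile" | "open_mouth" | "raised_eyebrows";

interface ExpressionStep {
  expression: PresetExpression; // the preset chosen in the side panel
  onTriggered: () => void;      // advances the lesson to the next step
}

async function startExpressionStep(step: ExpressionStep, tracker: Tracker) {
  // Start the webcam and show the feed so the user knows it is on.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  video.autoplay = true;
  document.body.appendChild(video); // criterion: visible "webcam on" signal

  await tracker.start(video);
  tracker.onResult((result: TrackingResult) => {
    // Trigger the step once the live expression matches the saved preset.
    if (classifyExpression(result.landmarks) === step.expression) {
      tracker.stop();
      stream.getTracks().forEach((t) => t.stop());
      video.remove();
      step.onTriggered();
    }
  });
}

// Placeholder; a real implementation might use landmark distances or a
// model's expression/blendshape scores.
declare function classifyExpression(landmarks: Landmark[]): PresetExpression;
```

Keeping the expression matching behind the generic Tracker interface is what would let future hand and pose triggers reuse the same webcam and step-trigger plumbing.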
Future tasks
- Replace the placeholder emoji images with Super Reality-themed ones