On the Pi:
- clone this repo
- cd jester_pi
- python stream_data.py
- type in the name of the file to save your data to
- record as many data points as you want (several hundred at a minimum). Keep your arm as still as possible (think of the acceleration vectors in x, y, and z)
- press Ctrl+C to exit the program; repeat for as many recordings as you want
- Now upload your files to the Google Colab notebook
- run through the notebook and train the SVM
- Now, back on the Pi, run python get_data_point.py
- Type the printed values into the Colab's model.predict([...]) call and see if it predicts the correct gesture!
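
The Colab training and prediction steps above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual notebook: it assumes each recorded file holds comma-separated x, y, z accelerometer readings (one sample per line) and that labels come from which file a sample belongs to. The synthetic data, gesture names, and `load_gesture_file` helper are all placeholders.

```python
# Hypothetical sketch of the Colab side: load recordings, train an SVM,
# then predict a single data point from get_data_point.py.
import numpy as np
from sklearn.svm import SVC

def load_gesture_file(path, label):
    # Assumed format: each row is one sample, "accel_x,accel_y,accel_z"
    data = np.loadtxt(path, delimiter=",")
    return data, np.full(len(data), label)

# Stand-in for real recordings: two synthetic gestures with different
# mean acceleration vectors (e.g. arm held vertical vs. horizontal)
rng = np.random.default_rng(0)
gesture_a = rng.normal([0.0, 0.0, 1.0], 0.1, size=(200, 3))
gesture_b = rng.normal([1.0, 0.0, 0.0], 0.1, size=(200, 3))
X = np.vstack([gesture_a, gesture_b])
y = np.array([0] * 200 + [1] * 200)

model = SVC(kernel="rbf")  # scikit-learn's default SVM kernel
model.fit(X, y)

# Paste the output of get_data_point.py in place of this sample point
print(model.predict([[0.05, -0.02, 0.98]]))  # close to gesture_a -> [0]
```

Because each sample is a single [x, y, z] acceleration vector, keeping your arm still during recording matters: the classifier separates gestures by where those vectors cluster, so a steady pose gives tighter, more separable clusters.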