Sign4HumanityProgram

Link to our YouTube video: https://youtu.be/ocnxru-I9xY Credit to Nicholas Renotte for showing us how to use many of the dependencies' features and for teaching us machine learning and data collection, topics we had not known much about before this project! https://www.youtube.com/watch?v=doDUihpj6ro&t=7538s

Thank you to Rishab for showing us how to create a UI using PySimpleGUI, also something we had never tried: https://www.youtube.com/watch?v=Z2AKGjc8bqk&t=1022s

TO RUN THE FLASK WEBSITE, TRY THESE STEPS

To access the Flask website:

  1. python3 -m venv env (create a virtual environment)
  2. Check the interpreter (choose the one that says 'env')
  3. Open a new terminal (it will use the new env)
  4. python -m pip install --upgrade pip (upgrade pip to the latest version)
  5. python -m pip install flask (install Flask)
  6. Type 'python -m flask run' into the terminal, then click the link that appears to open the site
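
For context, 'python -m flask run' looks for an app to serve (by default a file named app.py). A minimal app along these lines would work — this is a sketch only; the file name and page content are assumptions, not our project's exact code:

    from flask import Flask

    # Minimal sketch of an app that 'flask run' can discover and serve.
    # The route below just returns a placeholder page (an assumption,
    # not our project's actual site).
    app = Flask(__name__)

    @app.route("/")
    def home():
        return "Sign4Humanity tutorial site"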

DIFFICULTIES WE ENCOUNTERED

At first, IDEs such as VS Code and PyCharm did not work well for us, so we all tried out Jupyter for the first time.

Then, we had trouble importing many of the dependencies, especially TensorFlow. Specific TensorFlow versions only work with specific Python versions, so we had to research that a lot. Along the way, we needed Anaconda at one point to get things working.
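
A quick way to check which versions are actually in play is sketched below (the versions in the comments are examples, not necessarily the ones we used):

    import sys

    import tensorflow as tf

    # Print the Python and TensorFlow versions actually in use. Each
    # TensorFlow release supports only a narrow range of Python versions,
    # so a mismatch here is a common cause of import errors.
    print(sys.version)       # e.g. 3.9.x
    print(tf.__version__)    # e.g. 2.10.x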

At one point, we tried to embed the program into a website. When that didn't work, we tried to add a button within the website that launched the code; that didn't work either. Therefore, our final product has the code and the website running separately, with the website providing a tutorial tab on how to start the code.

In PySimpleGUI, we thought about adding a settings menu for turning the hand-skeleton overlay on or off whenever the user chooses; other settings could have been added too (a sketch of the idea is below).
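
A minimal sketch of that settings idea, assuming a boolean flag that the drawing code would read (the window layout and key names are illustrative, not from our actual code):

    import PySimpleGUI as sg

    # Hypothetical settings window: one checkbox that toggles whether the
    # MediaPipe hand skeleton gets drawn over the camera feed.
    layout = [
        [sg.Checkbox("Show hand skeleton", default=True, key="-SKELETON-")],
        [sg.Button("Save"), sg.Button("Cancel")],
    ]
    window = sg.Window("Settings", layout)
    event, values = window.read()
    if event == "Save":
        draw_skeleton = values["-SKELETON-"]  # flag for the drawing code
    window.close()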

Lastly, perhaps we didn't collect enough frames, folders, etc. of training data, or perhaps it was a lighting issue. In the end, the program was decently accurate at reading our sign language moves and translating them into text; however, it wasn't 100% perfect.
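
For reference, here is a sketch of the kind of keypoint-dataset layout used in the Renotte tutorial we followed (the sign names and counts are examples, not our exact dataset):

    import os

    # One folder per sign, one subfolder per recorded sequence, and one
    # keypoint file per frame saved inside it. Collecting more sequences
    # per sign (and more varied lighting) generally improves accuracy.
    DATA_PATH = "MP_Data"
    signs = ["hello", "thanks", "iloveyou"]
    sequences_per_sign = 30

    for sign in signs:
        for sequence in range(sequences_per_sign):
            os.makedirs(os.path.join(DATA_PATH, sign, str(sequence)), exist_ok=True)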

In the end, we worked through most of the problems and got a Minimum Viable Product to turn in.

THINGS WE LEARNED

To start, almost everything we did was new to us: using Jupyter, TensorFlow machine learning, the OpenCV camera, MediaPipe's hand-tracking library, PySimpleGUI, and some HTML and CSS techniques. The only thing we had done before was using the Turtle :) . Almost everything was a learning process along the way, with the internet as our greatest resource. We put two links at the top of this README file to thank our two biggest helpers.
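
To give a flavor of that combination, here is a minimal OpenCV + MediaPipe hand-tracking loop (a sketch of the technique we learned, not our exact project code):

    import cv2
    import mediapipe as mp

    # Read frames from the webcam, detect hands with MediaPipe, and draw
    # the hand skeleton on each frame until the user presses 'q'.
    mp_hands = mp.solutions.hands
    mp_drawing = mp.solutions.drawing_utils

    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    mp_drawing.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
            cv2.imshow("Sign4Humanity", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()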

WHAT WE ENJOYED

Since we worked in a group, it was fun bouncing ideas off of each other. Sometimes someone would get a spark of an idea and we would all work and brainstorm how to implement it. When facing bugs, we gave each other advice and suggested changes in the code's logic to get things working. Furthermore, we enjoyed this project because it was made up of brand-new topics that we hadn't learned before. Finally, our project is catered to a social good, which we believe coding should always be used for.

IN THE FUTURE

In the future, we will probably continue to work on the project after this course as we learn new things in the field of CS. We hope to be able to run the program directly from the Flask website through a button or something similar. We also hope to increase the accuracy of our signs and then add more signs. The HTML and CSS could always be improved to make a better-looking website. Our biggest endeavor is hopefully connecting two people so that they can "message" each other using the Turtle, as if they were texting on their phones, except through our program and with sign language.
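
One way the launch-from-the-website button might eventually work (a hypothetical sketch; the route and the script name run_recognizer.py are assumptions, not existing files in this repo):

    import subprocess

    from flask import Flask

    app = Flask(__name__)

    # Hypothetical route: clicking the website's button would POST here,
    # and the server would start the recognizer as a separate process.
    @app.route("/start", methods=["POST"])
    def start_recognizer():
        subprocess.Popen(["python", "run_recognizer.py"])
        return "Recognizer launched -- check for the camera window."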