The COVID-19 pandemic demonstrated how much we rely on healthcare professionals and other front-line workers. They are working around the clock to control the pandemic, and we decided it was time to shift some of that burden. It has become incredibly difficult to get a consultation with a doctor or another health professional, whether through an in-person or a virtual appointment. Our product lets concerned individuals assess their symptoms and then connects them with a live doctor for further assistance. We were inspired by the format of a walk-in clinic and aim to streamline the process of a concerned patient receiving professional advice. We wanted to make it as easy and efficient as possible for individuals to self-diagnose and then reach out for medical help if needed, which is why we were spurred to create a smart assistant action. Voice makes the experience far more accessible and far less daunting than a screen-based program: speech is the predominant method of communication and one that everyone is already familiar with.
It begins with the person saying "Hey Google, talk to my personal doctor", which launches the action on any Android phone or Google Home device. All interaction happens through spoken exchanges between the user and the Google Assistant. The application asks the user what symptoms they are experiencing and, once the user tells the assistant, it searches our database for diseases that match those symptoms and reports the results back, along with a list of treatments that may provide relief. The app then asks whether the user would like to be connected with a live doctor or continue talking to the personal doctor action. If the user responds "Connect me to a live doctor", the assistant asks them to say their phone number. Using the Twilio API, we send the user a text message containing a tel:[PHONE NUMBER] link; tapping the link on their phone places them in a call with a doctor on our network. Our program lets patients diagnose themselves from their own homes using only their voices and then connects them with a live doctor on hand. Even after the global pandemic subsides, our product will remain incredibly useful: picture the services and flexibility of a walk-in clinic from the comfort of your own home, where all you need to do is say "Talk to my personal doctor" to get immediate medical attention. Patients can also create a profile on our patient-doctor portal, where they enter their medical information and a user ID. A doctor on our network can then use this portal to diagnose the patient more accurately and provide suitable next steps. Each patient also selects a security key, which lets doctors confirm they are speaking with the right patient.
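To give a feel for the Twilio step, here is a minimal sketch using the Twilio Java helper library. The class name, credentials, and phone numbers are placeholders (in our hack the request is actually fired from within Voiceflow rather than a standalone Java program), but the Message.creator call is the same idea: text the patient a tel: link that dials an on-call doctor when tapped.

```java
import com.twilio.Twilio;
import com.twilio.rest.api.v2010.account.Message;
import com.twilio.type.PhoneNumber;

public class DoctorLinkSender {

    // Placeholder credentials; real values come from the Twilio console.
    private static final String ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
    private static final String AUTH_TOKEN  = "your_auth_token";
    private static final String FROM_NUMBER = "+15005550006"; // our Twilio number

    /**
     * Texts the patient a tel: link that, when tapped on their phone,
     * places them in a call with the on-call doctor.
     */
    public static void sendDoctorLink(String patientNumber, String doctorNumber) {
        Twilio.init(ACCOUNT_SID, AUTH_TOKEN);

        String body = "Tap to reach a doctor now: tel:" + doctorNumber;

        Message message = Message.creator(
                new PhoneNumber(patientNumber), // to: the number the patient spoke aloud
                new PhoneNumber(FROM_NUMBER),   // from: our Twilio number
                body
        ).create();

        System.out.println("Sent SMS, SID: " + message.getSid());
    }

    public static void main(String[] args) {
        // Hypothetical example numbers.
        sendDoctorLink("+16475550123", "+14165550199");
    }
}
```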
We designed the Google Home Action using Voiceflow, a platform that allows for the fast and flexible creation of smart assistant applications while abstracting away the lower-level logic and components. It let us interface with Google Sheets, which serves as our database, as well as with external APIs. It was our team's first time using Voiceflow, and it was an incredibly fun and exciting, albeit occasionally challenging, experience. Voiceflow helped us map out the project's workflow end to end, from capturing user input to interfacing with the Google Assistant. It provides the logic required for our application while hiding the underlying details, letting us focus on designing and implementing the algorithms. Voiceflow also made it incredibly easy to test and debug the application, and as an almost-no-code platform it was quite intuitive. Google Actions were unfamiliar to our entire team before this hackathon, and Voiceflow let us start creating immediately, turning our idea into a tangible product in a matter of hours. For the patient-doctor portal, we used Java and NetBeans to build a secure and reliable GUI, and Git so that multiple people could collaborate on the code base and complete this half of our hack effectively.
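As an illustration of the portal's security-key check described above, here is a minimal Swing sketch. The class and field names are hypothetical and the patient records are stubbed in memory; the actual portal stores full medical profiles and uses the NetBeans GUI builder.

```java
import javax.swing.*;
import java.awt.*;
import java.util.HashMap;
import java.util.Map;

public class PatientPortal {

    // In-memory stand-in for patient records keyed by user ID.
    private static final Map<String, String> SECURITY_KEYS = new HashMap<>();

    public static void main(String[] args) {
        SECURITY_KEYS.put("patient-001", "blue-falcon"); // sample record

        SwingUtilities.invokeLater(PatientPortal::showVerificationWindow);
    }

    // Small verification form: a doctor enters the patient's user ID and the
    // security key the patient chose, and the portal confirms the match.
    private static void showVerificationWindow() {
        JTextField idField = new JTextField(15);
        JTextField keyField = new JTextField(15);

        JPanel form = new JPanel(new GridLayout(0, 2, 5, 5));
        form.add(new JLabel("Patient user ID:"));
        form.add(idField);
        form.add(new JLabel("Security key:"));
        form.add(keyField);

        int choice = JOptionPane.showConfirmDialog(
                null, form, "Verify patient identity", JOptionPane.OK_CANCEL_OPTION);

        if (choice == JOptionPane.OK_OPTION) {
            String expected = SECURITY_KEYS.get(idField.getText().trim());
            boolean verified = expected != null && expected.equals(keyField.getText().trim());
            JOptionPane.showMessageDialog(null,
                    verified ? "Identity confirmed." : "Key does not match this patient.");
        }
    }
}
```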
The link to test out the voice system: https://creator.voiceflow.com/demo/6001be180b3adc00070016743938