Bellboy is a system designed for contactless interaction with public buttons. Installed above door, elevator, and service buttons, it enables the general public to 'press' these buttons through hover, voice, and frequent-user recognition. This website stores the master copy of the project description and goals, and links to programs and work related to the project. The project code is divided into two repositories: the embedded System code and the Services the Bellboy devices rely on.
Designed and implemented by Elma Khandaker, Sein Izumita, Shriya Gundala, Yusra Adinoyi, and Ryan Fleck. Contact us.
<iframe width="100%" height="280px" src="https://www.youtube-nocookie.com/embed/VDgH6YtpP4Y" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Our solution has four parts. Each is unit tested, and most have tools to check for security flaws, unit-test coverage, and code quality.
The System repo, stored at /Bellboy-Capstone/System, contains the embedded Python program that runs on the Bellboy prototype hardware. Read more here.
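As a rough illustration only (not code from the repository), a minimal sketch of the kind of sensing loop the System program could run on the Pi is shown below; the GPIO pins, threshold, and `trigger_button` helper are hypothetical.

```python
# Minimal sketch of an embedded sensing loop.
# Pin numbers, threshold, and trigger_button() are hypothetical, not the actual System code.
from time import sleep
from gpiozero import DistanceSensor, LED

sensor = DistanceSensor(echo=24, trigger=23)  # hypothetical ultrasonic wiring
status_led = LED(17)                          # hypothetical feedback LED

HOVER_THRESHOLD_M = 0.10  # treat anything closer than 10 cm as a hover


def trigger_button() -> None:
    """Placeholder for the actuator that physically presses the public button."""
    pass


def main() -> None:
    while True:
        if sensor.distance < HOVER_THRESHOLD_M:
            status_led.on()   # visual feedback that a hover was detected
            trigger_button()  # actuate the target switch
        else:
            status_led.off()
        sleep(0.05)


if __name__ == "__main__":
    main()
```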
The Services repo, stored at /Bellboy-Capstone/Services, contains the Django-based backend that supports the Bellboy systems with connections to multiple databases and is responsible for storing long-term Bellboy usage data. Read more here.
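For illustration, a usage record in the Services backend might be modelled roughly as below; the model and field names are assumptions, not the actual schema.

```python
# Hypothetical sketch of a Bellboy usage record in Django (not the real Services schema).
from django.db import models


class UsageEvent(models.Model):
    """One contactless 'button press' reported by a Bellboy unit."""

    device_id = models.CharField(max_length=64)            # which Bellboy unit reported it
    method = models.CharField(                              # how the press was made
        max_length=16,
        choices=[("hover", "Hover"), ("voice", "Voice"), ("face", "Frequent user")],
    )
    button_label = models.CharField(max_length=32)           # e.g. "Floor 3", "Open"
    nearby_bluetooth_ids = models.JSONField(default=list)    # anonymized radio IDs seen nearby
    created_at = models.DateTimeField(auto_now_add=True)     # when the event occurred
```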
The WebSocket repo, stored at /Bellboy-Capstone/WebSocket, facilitates real-time communication between Bellboy devices and clients via WebSockets. In our working implementation, it is used to stream ephemeral logs to the frontend.
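As a sketch of the device side of this log stream (the service itself is Node-based; the endpoint URL and message shape here are assumptions for illustration):

```python
# Hypothetical device-side log streamer using the 'websockets' package.
# The endpoint URL and JSON message format are assumptions, not the actual protocol.
import asyncio
import json

import websockets

LOG_ENDPOINT = "ws://example.invalid/bellboy/logs"  # placeholder URL


async def stream_logs(lines):
    async with websockets.connect(LOG_ENDPOINT) as ws:
        for line in lines:
            # Ephemeral log lines are pushed as small JSON messages.
            await ws.send(json.dumps({"device": "bellboy-01", "log": line}))
            await asyncio.sleep(0.1)


if __name__ == "__main__":
    asyncio.run(stream_logs(["boot ok", "sensor calibrated", "listening"]))
```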
The Website repo, stored at /Bellboy-Capstone/Website, contains the frontend, built with React, which provides users with graphs and information about Bellboy utilization, along with advanced tools for security staff. It connects to the two Bellboy microservices (Django and Node.js) to access Bellboy status and usage history.
The Bellboy project aims to create a contactless switch-actuation system that will augment existing public buttons for opening doors, calling elevators, activating lights, and crossing streets. Using a Raspberry Pi, cameras, ultrasonic distance sensors, and LEDs for output, the system will enable users to hover their finger over, speak to, or simply stand by the switch and be recognized, in order to activate their target button.
Every day, millions of public buttons are used by members of the general public. Most of these inputs on doors, elevators, and walls rely on mechanical buttons, which must be directly and physically actuated in order to convey the user's intentions. Additionally, these buttons may be difficult to use for people with visual or physical impairments. How could we replace these interactions, removing a potential vector for disease transmission, while also increasing accessibility?
We need to provide a system where existing buttons can be pressed without physical contact between the button and the public user. Let's design a system that can fit above already-installed public switches and trigger them when requested. People should be able to use this system without touching it, so we will offer a variety of input methods for activating the buttons. This will make opening doors, selecting an elevator floor, and using other public controls much more sanitary.
The Unit must:
- Enable all users to select the floor they would like to travel to using contactless input methods
- Be easy to integrate with existing public systems, without obstructing the current door/elevator panel/button interface
To achieve these goals, Bellboy will include:
- Point-to-open hand recognition, where users may hover their finger above the button they wish to press, after which Bellboy will trigger it for them.
- Vocal command recognition, where users may speak the unit number (if many units are nearby) and a command like open, close, or a floor number (a toy parser sketch follows this feature list).
- Frequent user recognition, using facial recognition paired with nearby Bluetooth radio IDs, to allow the system to predict and proactively repeat a user's previous actions.
In addition, Bellboy will use the aforementioned features to provide:
- Command-and-monitor system, where security personnel can use the built-in cameras to view and assist people using Bellboy.
- User guidance system, where users may speak the location they would like to go to, and Bellboy units on the premises will recognize the user at each intersection and provide the next direction.
Both of these features will be supported by an enterprise-grade backend, where users are tracked, data is processed, and results are returned to our web GUI.
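To make the voice path concrete, a toy parser for transcribed commands might look like the following; the grammar, function name, and number words are illustrative assumptions, not the production recognizer.

```python
# Toy parser for transcribed voice commands (illustrative only; the real
# recognizer and its grammar may differ).
WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}


def parse_command(transcript: str):
    """Return (unit, action) from phrases like 'bellboy two open' or 'floor three'."""
    words = transcript.lower().split()
    unit = None
    if "bellboy" in words:
        i = words.index("bellboy")
        if i + 1 < len(words) and words[i + 1] in WORD_NUMBERS:
            unit = WORD_NUMBERS[words[i + 1]]
    if "open" in words:
        return unit, "open"
    if "close" in words:
        return unit, "close"
    if "floor" in words:
        i = words.index("floor")
        if i + 1 < len(words) and words[i + 1] in WORD_NUMBERS:
            return unit, f"floor {WORD_NUMBERS[words[i + 1]]}"
    return unit, None


# Example: parse_command("bellboy two open") -> (2, "open")
```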
The Bellboy system and services will meet the following set of requirements:
- Be configurable to allow a technician to calibrate it for these parameters:
- Organization of buttons below the switch, the button names, and keyword triggers.
- The location of the unit within a building
- Recognize user input from a variety of methods:
- Voice commands indicating a button, direction, or location in the building to travel to.
- Hand gestures pointing at the physical buttons below the device.
- The face of the user as a method of repeating a common action or providing directions.
- Provide feedback to the user, visually and audibly, to:
- Confirm with the user that their intended action is about to, or is, being carried out.
- Provide the user with navigation information or confirm a repeated action or voice command.
- Collect data:
- Collect information from each Bellboy unit when doors are activated: nearby people, Bluetooth radio IDs, and data associated with the method used to open the door (an illustrative sketch appears after this requirements list).
- Facilitate the connection between web client and Bellboy units:
- Allow the web GUI to see all connected Bellboy units.
- Provide STUN/TURN services so the frontend can begin a WebRTC session with a Bellboy unit, streaming audio/video to security staff so they can assist Bellboy users.
- Provide simplified commands based on data insights to Bellboy units:
- Actions to take when visited by a given user
- Provide a set of information to security personnel:
- Frequent accesses, including associated information about frequent users.
- The state of all switches, and whether the nearby door has been used recently.
- Enable security personnel to interact with the Bellboy units:
- Provide a method of viewing the world through the Bellboy’s front-facing camera
- Provide a method of listening and speaking through Bellboy’s audio equipment
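As an illustration of the data-collection requirement above (not the actual API; the endpoint URL and payload fields are assumptions), a unit might report a usage event to the Services backend roughly like this:

```python
# Hypothetical report of a single usage event to the Services backend.
# The URL and payload fields are assumptions for illustration.
import requests

SERVICES_URL = "https://example.invalid/api/usage-events/"  # placeholder endpoint


def report_usage_event(device_id: str, method: str, button_label: str,
                       nearby_bluetooth_ids: list) -> None:
    payload = {
        "device_id": device_id,
        "method": method,                             # "hover", "voice", or "face"
        "button_label": button_label,                 # e.g. "Floor 3"
        "nearby_bluetooth_ids": nearby_bluetooth_ids,
    }
    response = requests.post(SERVICES_URL, json=payload, timeout=5)
    response.raise_for_status()


# Example: report_usage_event("bellboy-01", "hover", "Open", ["aa:bb:cc:dd:ee:ff"])
```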
Our group aims to successfully develop a touchless public interface for a world afflicted by the COVID-19 (SARS-CoV-2) virus.
We will also research which designs and functions best suit the ideas we plan to implement, consulting relevant literature to support our design choices. Making sure that our design is intuitive and accessible is crucial so that end users can use it with ease. We will learn how to interface our sensors with the Raspberry Pi and how to use the resulting data to produce output in our website dashboard. We want to work efficiently as a group by setting deadlines and managing our time. Lastly, we plan to make our design fun for ourselves as well as our end users.
ID | Description | Final Cost | Link |
---|---|---|---|
1 | Raspberry Pi 3 | $100 CAD | Link |
2 | 32GB MicroSD Card | $35 CAD | Link |
3 | Description | Final Cost | Link |
4 | Description | Final Cost | Link |
For any questions about the project, please contact the group members with the information below.
Name | GitHub | Email Address |
---|---|---|
Ryan Fleck | @RyanFleck | rflec028@uottawa.ca |
Shriya Gundala | @gshriya | sgund051@uottawa.ca |
Yusra Adinoyi | @yozohu | yadin030@uottawa.ca |
Elma Khandaker | @elmakhandaker | ekhan029@uottawa.ca |
Sein Izumita | @seinizumita | sizum075@uottawa.ca |
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.