Eye-of-Agamotto 🔥

Dr. Strange needs your hand gestures to control the multiverse. Our AI application detects hand movements for a seamless magical experience. Just like Dr. Strange, control the digital world with a flick!

Inspiration 🚀

The Eye of Agamotto application is a testament to the power of technology and its ability to create magical experiences. With its advanced hand tracking and gesture recognition features, the app allows users to control their digital world with the wave of a hand. Just like the mystical Eye of Agamotto from the Marvel universe, this app unlocks a new level of power and control over technology. It's truly inspiring to see how technology can blur the lines between reality and fantasy, giving us the ability to interact with the digital world in ways we never thought possible. With the Eye of Agamotto app, the possibilities are truly endless!

What it does 🥑

The Eye of Agamotto is an AI-powered application that utilizes advanced hand tracking and gesture recognition technologies to provide users with a novel and intuitive way of interacting with technology. The application consists of two primary models, each optimized for a different platform.

The first model is optimized for the web and lets users control various aspects of their browsing experience with hand gestures. It integrates the TensorFlow.js hand-tracking library with a camera library to accurately track the user's hand movements, and uses the landmark points detected on the hand to build gestures such as moving the mouse pointer, scrolling the webpage, and liking images on social media platforms. This gives users a unique and intuitive way to interact with web content, improving the overall browsing experience.

The second model is optimized for computer systems and gives users a hands-free way of controlling their devices. It is written in Python and uses the OpenCV library to access the camera on the host device. Using the MediaPipe library, the model recognizes hand gestures, identifies which fingers are raised, and measures the distance between fingertips. Users can move the mouse pointer, perform click actions, and even automate the typing of certain words with hand gestures. This model is particularly useful for people who need hands-free technology or who have mobility impairments, giving them a more accessible and intuitive way to use their computer.
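
As a rough illustration of how these pieces fit together, here is a minimal sketch of the desktop model's core loop: OpenCV supplies webcam frames, MediaPipe locates the hand, the index fingertip drives the cursor through autopy, and a thumb-to-index pinch is treated as a left click via pynput. The pinch threshold and this particular gesture mapping are illustrative assumptions, not the project's tuned values.

import cv2
import mediapipe as mp
import autopy
from pynput.mouse import Button, Controller

mouse = Controller()
screen_w, screen_h = autopy.screen.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so the cursor follows the hand
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        tip, thumb = lm[8], lm[4]  # index fingertip, thumb tip
        # MediaPipe landmarks are normalized to [0, 1]; scale them to screen pixels.
        x = min(max(tip.x, 0.0), 1.0) * (screen_w - 1)
        y = min(max(tip.y, 0.0), 1.0) * (screen_h - 1)
        autopy.mouse.move(x, y)
        # A small thumb-to-index distance reads as a pinch, i.e. a click
        # (a real implementation would debounce so one pinch is one click).
        if abs(tip.x - thumb.x) + abs(tip.y - thumb.y) < 0.05:  # assumed threshold
            mouse.click(Button.left)
    cv2.imshow("Eye of Agamotto", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()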

How The Eye of Agamotto came to life! 🚀

Our First Model (Website Optimized):

To track the user's hand movements, we used the TensorFlow.js hand-tracking library, with a camera library feeding webcam frames to the model. We created gesture functionality on our mock HTML webpages using the landmark points associated with the user's hand. These gestures include moving the mouse pointer and scrolling the webpage, and in our example we used gestures to like images on a social media platform.

Our Second Model (Optimized for Computer Systems):

Python was used to create our second model. To access the camera on the host device, we used the OpenCV Python library. Our AI detects hands thanks to the MediaPipe library: we built a module that recognizes hands, identifies which fingers are raised, and measures the distance between fingertips. Using these signals, we mapped different hand gestures to different operations, as sketched below. To effectively demonstrate the experience provided by our AI, we also developed a messaging app that is compatible with our model.
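
To make that concrete, the sketch below shows one common way to derive the "fingers up" state and fingertip distances from MediaPipe's 21 hand landmarks. The landmark indices are MediaPipe's own; the comparison logic and the example gesture mapping are our assumptions about the general approach, not the project's exact code.

import math

TIP_IDS = [4, 8, 12, 16, 20]  # MediaPipe fingertip landmarks: thumb..pinky

def fingers_up(hand_landmarks):
    # hand_landmarks is a MediaPipe NormalizedLandmarkList for one hand.
    # A finger counts as "up" when its tip sits above the joint two
    # landmarks below it (image y grows downward). The thumb is compared
    # on x instead, which assumes a right hand facing the camera.
    lm = hand_landmarks.landmark
    flags = [1 if lm[4].x < lm[3].x else 0]
    for tip in TIP_IDS[1:]:
        flags.append(1 if lm[tip].y < lm[tip - 2].y else 0)
    return flags

def fingertip_distance(hand_landmarks, a=8, b=4):
    # Normalized Euclidean distance between two landmarks
    # (defaults: index fingertip to thumb tip).
    lm = hand_landmarks.landmark
    return math.hypot(lm[a].x - lm[b].x, lm[a].y - lm[b].y)

def classify_gesture(hand_landmarks):
    # Hypothetical mapping: index finger alone means "move the pointer";
    # index and middle fingertips pinched together means "click".
    up = fingers_up(hand_landmarks)
    if up[1] and not any(up[2:]):
        return "move"
    if up[1] and up[2] and fingertip_distance(hand_landmarks, 8, 12) < 0.05:
        return "click"
    return "idle"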

Challenges we ran into (。_。)

During the development of this application, we encountered several challenges. The biggest was optimizing the hand-tracking and gesture-recognition algorithms to work seamlessly in both the website and computer-system models. Tuning the AI to accurately recognize hand gestures and associate them with specific actions was also difficult, and camera angle, lighting conditions, and background interference further complicated hand detection. Finally, debugging and testing the application to ensure smooth performance across different devices and platforms took considerable effort. Despite these obstacles, we persisted and overcame them to create an impressive AI-powered hand-tracking and gesture-recognition application.

What's next for Eye of Agamotto 🤖

Integration with Virtual and Augmented Reality:

The application can be integrated with virtual and augmented reality platforms to provide an immersive, interactive experience. For example, users could grab virtual objects and move them around with hand gestures.

Expansion to Mobile Devices:

The application can be brought to smartphones and tablets through a dedicated mobile app. With the increasing popularity of mobile devices, this expansion would open a new market for the application.

Collaboration with Other AI Features:

The application can be combined with other AI capabilities, such as voice recognition and natural language processing, to provide a more sophisticated user experience. For example, hand gestures could control a virtual assistant that also recognizes voice commands and responds accordingly.

Unleash the Power of Agamotto 🔥

Desktop Application (Python): After you download the repository, you need to install the dependencies. You can install them by running the following commands in your terminal.

pip install opencv-python
pip install mediapipe
pip install autopy
pip install numpy
pip install pynput

After you install the dependencies, run the main.py file and grant the application access to your camera.
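
For example, from the project directory:

python main.py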