# drag-and-drop

This project aims to manipulate virtual objects through recognition of your hand.



This method has applications in several areas, from accessibility tools to new interaction features. It works by recognizing the hands so that the coordinates of the fingers can interact with virtual environments. The method is simple, and the repository is still under construction, with the goal of optimizing the idea and applying it in areas of public need.
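As a brief illustration of how finger coordinates reach a virtual environment: MediaPipe Hands (used by this module) reports landmarks normalized to the range [0, 1], so they must be scaled to pixel coordinates before interacting with on-screen objects. The helper below is a hypothetical sketch, not part of the module:

```python
# Sketch (not part of the module): mapping a normalized MediaPipe
# landmark to pixel coordinates so it can interact with screen objects.

def to_pixels(landmark_x, landmark_y, frame_width, frame_height):
    """Convert normalized [0, 1] landmark coords to integer pixel coords."""
    x = int(landmark_x * frame_width)
    y = int(landmark_y * frame_height)
    # Clamp in case the landmark falls slightly outside the frame.
    x = min(max(x, 0), frame_width - 1)
    y = min(max(y, 0), frame_height - 1)
    return x, y

print(to_pixels(0.5, 0.25, 640, 480))  # (320, 120) on a 640x480 frame
```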


The module defines the following functions:

| Function name | Description |
| --- | --- |
| `init` | Initializes the class: sets the directory or capture device and whether to record (True/False) |
| `Frame()` | Reads a frame from the capture and returns it as an `np.array` |
| `Hands()` | Runs the MediaPipe Hands handling and detection process |
| `Landmarks()` | Finds the landmarks of each finger according to the following list |
| Code | Description |
| --- | --- |
| 0 | WRIST |
| 1, 2, 3, 4 | THUMB (CMC, MCP, IP, TIP) |
| 5, 6, 7, 8 | INDEX FINGER (MCP, PIP, DIP, TIP) |
| 9, 10, 11, 12 | MIDDLE FINGER (MCP, PIP, DIP, TIP) |
| 13, 14, 15, 16 | RING FINGER (MCP, PIP, DIP, TIP) |
| 17, 18, 19, 20 | PINKY (MCP, PIP, DIP, TIP) |
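The index list above (the standard MediaPipe Hands numbering) can be captured in code; the constant names below are illustrative, not part of the module:

```python
# Landmark indices from the table above (standard MediaPipe Hands numbering).
WRIST = 0
FINGERTIPS = {"THUMB": 4, "INDEX": 8, "MIDDLE": 12, "RING": 16, "PINKY": 20}

def fingertip_index(finger):
    """Return the landmark index of a fingertip by finger name."""
    return FINGERTIPS[finger.upper()]

print(fingertip_index("index"))  # 8
```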
| Function name | Description |
| --- | --- |
| `Distance()` | Computes the distance between the fingers |
| `DistanceActivation()` | Returns a Boolean when the distance between the specified fingers decreases below a threshold |
| `Object()` | Defines the object to be manipulated and computes the finger-to-object interaction |
| `ShowLandmarks()` | Draws the landmarks on the video or capture |
| `ShowDistance()` | Draws the distance between the fingers |
| `Show()` | Displays the video |
| `Record()` | Records frames |
| `DestroyCap()` | Releases the capture or video device and destroys all windows generated by OpenCV |
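A minimal sketch of the logic behind `Distance()`, `DistanceActivation()`, and `Object()`. The function names and threshold below are stand-ins, not the module's actual API: the pinch distance is the Euclidean distance between two fingertip points, activation fires when it drops below a threshold, and a grabbed object then follows the fingertip.

```python
import math

# Illustrative stand-ins for Distance()/DistanceActivation()/Object();
# the real module's signatures may differ.

def finger_distance(p1, p2):
    """Euclidean distance between two (x, y) fingertip points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def is_pinching(thumb_tip, index_tip, threshold=30):
    """Boolean activation when the fingers come close enough (in pixels)."""
    return finger_distance(thumb_tip, index_tip) < threshold

def drag(obj, fingertip, pinching):
    """Center the object (x, y, w, h) on the fingertip while pinching."""
    x, y, w, h = obj
    if pinching:
        return (fingertip[0] - w // 2, fingertip[1] - h // 2, w, h)
    return obj

box = (100, 100, 80, 80)
thumb, index = (210, 200), (220, 205)
box = drag(box, index, is_pinching(thumb, index))
print(box)  # (180, 165, 80, 80): the box now follows the index fingertip
```

In the real module, the fingertip points would come from `Landmarks()` on each frame, so the object tracks the hand for as long as the pinch is held.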
