Open-Source Code for Hand Gesture Recognition – Sign Language Translation
MediaPipe is a framework for building multimodal (e.g., video, audio, or any time-series data) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
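To make the "graph of modular components" idea concrete, here is a minimal pure-Python sketch of a pipeline built from chained processing nodes. All names here (`Node`, `Graph`, the toy stages) are illustrative stand-ins, not MediaPipe's actual API, which defines graphs of calculators via configuration files.

```python
# Minimal sketch of a perception pipeline as a graph of modular
# components, in the spirit of MediaPipe's calculator graphs.
# Node/Graph are hypothetical names for illustration only.

class Node:
    """A processing component: consumes a packet, emits a packet."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def process(self, packet):
        return self.fn(packet)

class Graph:
    """A linear chain of nodes; real MediaPipe graphs may branch and merge."""
    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, packet):
        for node in self.nodes:
            packet = node.process(packet)
        return packet

# Example: a toy "video frame" flowing through preprocessing and inference.
graph = Graph([
    Node("decode", lambda frame: frame.lower()),
    Node("infer", lambda frame: {"input": frame, "label": "hand"}),
])
result = graph.run("FRAME_0")
```

Each stage only sees the packet produced by the previous one, which is what lets media processing and ML inference components be swapped independently.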
In the visualization above, the red dots represent the localized hand landmarks, and the green lines are simply connections between selected landmark pairs for visualization of the hand skeleton. The red box represents a hand rectangle that covers the entire hand, derived either from hand detection (see hand detection example) or from the previous round of hand landmark localization using an ML model (see also model card). Hand landmark localization is performed only within the hand rectangle for computational efficiency and accuracy, and hand detection is only invoked when landmark localization could not identify hand presence in the previous iteration.
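The detection/tracking handoff described above can be sketched as a loop: the (relatively expensive) hand detector runs only when the previous round of landmark localization failed to find a hand; otherwise landmarks are localized inside the rectangle carried over from the last frame. This is a hypothetical pure-Python sketch, with `detect_hand` and `localize_landmarks` as stand-ins for the real ML models.

```python
# Hypothetical sketch of the detect-once, then-track control flow.
# Frames are modeled as strings; real inputs would be image tensors.

def detect_hand(frame):
    """Stand-in palm detector: returns a hand rectangle or None."""
    return (10, 10, 100, 100) if "hand" in frame else None

def localize_landmarks(frame, rect):
    """Stand-in landmark model: returns (21 landmarks, hand_present)."""
    present = "hand" in frame
    pts = [(rect[0] + i, rect[1] + i) for i in range(21)] if present else []
    return pts, present

def track(frames):
    rect = None
    results = []
    for frame in frames:
        if rect is None:             # no hand last round: run the detector
            rect = detect_hand(frame)
        if rect is None:             # still nothing: skip this frame
            results.append(None)
            continue
        landmarks, present = localize_landmarks(frame, rect)
        results.append(landmarks if present else None)
        if not present:              # lost the hand: force detection next frame
            rect = None
    return results
```

The payoff is that on most frames only the landmark model runs, with the detector invoked just to (re)acquire the hand.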
The example can also run in a mode that localizes hand landmarks in 3D (i.e., estimating an extra z coordinate):
In the visualization above, the localized hand landmarks are represented by dots in different shades, with the brighter ones denoting landmarks closer to the camera.
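As an illustration of that depth shading (not MediaPipe's actual rendering code), the estimated z coordinate can be mapped linearly to a gray value so that landmarks nearer the camera (smaller z) draw brighter. The function name and value ranges below are assumptions for the sketch.

```python
# Illustrative mapping from a landmark's z coordinate to a 0-255 shade;
# nearer landmarks (smaller z) come out brighter.

def depth_to_shade(z, z_min, z_max):
    """Linearly map z in [z_min, z_max] to a gray value in [255, 0]."""
    if z_max == z_min:
        return 255
    t = (z - z_min) / (z_max - z_min)   # 0 at nearest, 1 at farthest
    return round(255 * (1.0 - t))

# Toy z values: z grows away from the camera.
zs = [-0.05, 0.0, 0.12]
lo, hi = min(zs), max(zs)
shades = [depth_to_shade(z, lo, hi) for z in zs]
# the nearest landmark receives the brightest shade
```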