Open-Source Code for Hand Gesture Recognition – Sign Language Translation
MediaPipe is a framework for building multimodal (e.g., video, audio, or any time-series data) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including, for instance, inference models (e.g., TensorFlow, TFLite) and media processing functions.
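As a minimal sketch of what driving such a pipeline looks like, the snippet below runs the hand-landmark graph on a single image through MediaPipe's Python solution wrapper. The original example ships as a C++ graph, so using the Python wrapper here is an assumption, and the input filename is hypothetical; it assumes the `mediapipe` and `opencv-python` packages are installed.

```python
# Minimal sketch: running the hand-landmark pipeline on one image
# via MediaPipe's Python "solutions" wrapper.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    image = cv2.imread("hand.jpg")  # hypothetical input image
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 landmarks per hand, normalized to [0, 1] image coordinates.
            wrist = hand.landmark[mp_hands.HandLandmark.WRIST]
            print(f"wrist at x={wrist.x:.2f}, y={wrist.y:.2f}")
```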
In the visualization above, the red dots represent the localized hand landmarks, and the green lines are simply connections between selected landmark pairs for visualization of the hand skeleton. The red box represents a hand rectangle that covers the entire hand, derived either from hand detection (see the hand detection example) or from the previous round of hand landmark localization using an ML model (see also the model card). Hand landmark localization is performed only within the hand rectangle for computational efficiency and accuracy, and hand detection is only invoked when landmark localization could not identify hand presence in the previous iteration.
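In the Python wrapper, that handoff between detection and tracking is exposed as confidence thresholds, and the dot/line rendering can be reproduced with the bundled drawing utilities. A sketch over a webcam stream follows; the threshold values and colors are illustrative choices, not values from the original example:

```python
# Sketch: live hand tracking where the palm detector is re-run only when
# the landmark tracker loses the hand (min_tracking_confidence governs this).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

# Red dots for landmarks, green lines for skeleton connections (BGR colors).
dot_spec = mp_drawing.DrawingSpec(color=(0, 0, 255), thickness=2, circle_radius=3)
line_spec = mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=2)

cap = cv2.VideoCapture(0)
with mp_hands.Hands(static_image_mode=False,
                    max_num_hands=1,
                    min_detection_confidence=0.5,      # gate for the palm detector
                    min_tracking_confidence=0.5) as hands:  # below this, re-detect
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for hand in results.multi_hand_landmarks or []:
            mp_drawing.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS,
                                      dot_spec, line_spec)
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
```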
The example can also run in a mode that localizes hand landmarks in 3D (i.e., estimating an extra z coordinate):
In the visualization above, the localized hand landmarks are represented by dots in different shades, with the brighter ones denoting landmarks closer to the camera.
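When the pipeline runs in 3D mode, the extra coordinate is already part of each landmark returned by the Python wrapper: per the solution's documentation, z is depth relative to the wrist, with smaller values meaning closer to the camera. A quick sketch of reading it, mirroring the bright-to-dark shading above (the input filename is again hypothetical):

```python
# Sketch: inspecting the estimated depth (z) of each landmark.
# z is relative to the wrist; smaller values = closer to the camera.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    image = cv2.imread("hand.jpg")  # hypothetical input image
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        hand = results.multi_hand_landmarks[0]
        # Order landmark indices from nearest to farthest, like the
        # bright-to-dark shading in the visualization.
        by_depth = sorted(enumerate(hand.landmark), key=lambda p: p[1].z)
        for idx, lm in by_depth[:5]:
            print(f"landmark {idx}: z={lm.z:+.3f} (nearest first)")
```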