Open-Source Code for Hand Gesture Recognition – Sign Language Translation
MediaPipe is a framework for building multimodal (e.g., video, audio, or other time-series data) applied ML pipelines. With MediaPipe, a perception pipeline can be built as a graph of modular components, including inference models (e.g., TensorFlow, TFLite) and media-processing functions.
In the visualization above, the red dots represent the localized hand landmarks, and the green lines are simply connections between selected landmark pairs for visualization of the hand skeleton. The red box represents a hand rectangle that covers the entire hand, derived either from hand detection (see the hand detection example) or from the previous round of hand landmark localization using an ML model (see also the model card). Hand landmark localization is performed only within the hand rectangle, for computational efficiency and accuracy, and hand detection is invoked only when landmark localization failed to identify hand presence in the previous iteration.
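The detection/tracking hand-off described above can be sketched as a simple control loop. This is a hypothetical illustration of the logic, not the MediaPipe API: `detect_hand` and `localize_landmarks` stand in for the real palm-detection and hand-landmark models, and the frame dictionaries are dummy inputs.

```python
def detect_hand(frame):
    # Stand-in for the palm detector: returns a hand rectangle or None.
    return frame.get("hand_rect")

def localize_landmarks(frame, rect):
    # Stand-in for the landmark model: returns (landmarks, rect_for_next_frame),
    # or (None, None) when it loses the hand.
    if frame.get("hand_visible", True):
        return frame.get("landmarks", []), rect
    return None, None

def run_pipeline(frames):
    """Invoke the detector only when tracking from the previous frame failed."""
    tracked_rect = None
    detector_calls = 0
    outputs = []
    for frame in frames:
        if tracked_rect is None:          # no hand carried over from last round
            tracked_rect = detect_hand(frame)
            detector_calls += 1
        if tracked_rect is not None:
            landmarks, tracked_rect = localize_landmarks(frame, tracked_rect)
            outputs.append(landmarks)
        else:
            outputs.append(None)
    return outputs, detector_calls
```

Run over a short synthetic sequence, the heavier detector fires only on the first frame and again after the hand disappears, while the landmark model handles every tracked frame, which is exactly the efficiency argument made above.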
The example can also run in a mode that localizes hand landmarks in 3D (i.e., estimating an extra z coordinate):
In the visualization above, the localized hand landmarks are represented by dots in different shades, with the brighter ones denoting landmarks closer to the camera.
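The shading scheme can be sketched as a normalization of the estimated z coordinates, where a smaller z means closer to the camera. This is an illustrative helper, not MediaPipe code; the landmark tuples are assumed inputs.

```python
def depth_to_brightness(landmarks):
    """Map each (x, y, z) landmark to a brightness in [0, 1].

    The landmark closest to the camera (smallest z) gets 1.0,
    the farthest gets 0.0 -- matching the 'brighter = closer'
    convention of the visualization.
    """
    zs = [z for (_, _, z) in landmarks]
    z_min, z_max = min(zs), max(zs)
    span = (z_max - z_min) or 1.0  # avoid division by zero if all z are equal
    return [(z_max - z) / span for (_, _, z) in landmarks]
```

For example, three landmarks at z = -0.5, 0.0, and 0.5 would render at full, half, and zero brightness respectively.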