Open Source Project Proposes Vision-Free Grasping With RFID and Touchscreens
Liatris is an open-source hardware and software project (led by roboticist Mark Silliman) that does away with vision completely. Instead, it determines the identity and pose of slightly modified objects using just a touchscreen and an RFID reader. It's simple, relatively inexpensive, and as long as the robot isn't asked to handle an object it has never encountered before, it works impressively well.
To get around the perception problem, Liatris uses a few clever tricks. First, each object has an RFID tag attached to it with a unique identifier, so the robot can wirelessly detect what it's working with. Once the robot has scanned the RFID tag, it looks up the identifier in a global, open-source database of objects and downloads a CAD model along with a grasp pose that "defines the ideal pose for the gripper prior to grasping the object."
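The lookup step described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Liatris code: the tag ID, database schema, field names, and model URL are all invented for the example.

```python
# Hypothetical sketch: an RFID tag's unique ID keys into a database
# record holding a CAD model reference and a pre-defined grasp pose.
# In Liatris the database is a shared, global, open-source resource;
# here it is stood in for by a plain dictionary.
OBJECT_DB = {
    "E200-3412-DCA0": {
        "name": "coffee_mug",
        "cad_model": "https://example.org/models/coffee_mug.stl",  # placeholder URL
        "grasp_pose": {                      # gripper pose in the object's frame
            "x": 0.0, "y": 0.02, "z": 0.11,  # metres (invented values)
            "roll": 0.0, "pitch": 1.57, "yaw": 0.0,  # radians
        },
    },
}

def lookup_object(tag_id):
    """Return the CAD model reference and grasp pose for a scanned tag ID."""
    record = OBJECT_DB.get(tag_id)
    if record is None:
        raise KeyError(f"Unknown RFID tag: {tag_id}")
    return record["cad_model"], record["grasp_pose"]
```

The key design point is that all of the hard perception work is done offline, once per object type: the robot at runtime only needs a dictionary lookup.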
So now that you know what the object is and how to grasp it, you just need to know exactly where it is and what orientation it's in. You can't get that sort of information very easily from an RFID tag, so this is where the touchscreen comes in: each object is (slightly) modified with an isosceles triangle of conductive points on its base. The touchscreen reads those contact points to get an exact location for the object, as well as its orientation, courtesy of the pointy end of the triangle. With this data, the robot can accurately register the CAD model of the object against the touchscreen, and as long as it knows exactly where the touchscreen is, it can grasp the real object based solely on the model. The robot doesn't have to "see" anything: with just the touchscreen and an RFID reader, a headless robot arm can grasp just about whatever you want it to.
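The geometry behind the triangle trick is straightforward and can be sketched in a few lines. This is an illustrative reconstruction, not code from the project: given the three touch coordinates, the apex of the isosceles triangle is the vertex equidistant from the other two, the midpoint of the base gives a reference position, and the direction from that midpoint to the apex gives the heading.

```python
import math

def object_pose_from_touches(p1, p2, p3):
    """Estimate a 2D pose (position, heading) from three conductive touch
    points arranged as an isosceles triangle on an object's base.
    Hypothetical helper, not from the Liatris codebase."""
    pts = [p1, p2, p3]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # The apex is the vertex whose two adjacent sides are (nearly) equal.
    apex_idx = min(
        range(3),
        key=lambda i: abs(dist(pts[i], pts[(i + 1) % 3])
                          - dist(pts[i], pts[(i + 2) % 3])),
    )
    apex = pts[apex_idx]
    base = [pts[i] for i in range(3) if i != apex_idx]

    # Position: midpoint of the base edge.
    mid = ((base[0][0] + base[1][0]) / 2, (base[0][1] + base[1][1]) / 2)

    # Orientation: direction from the base midpoint toward the apex.
    theta = math.atan2(apex[1] - mid[1], apex[0] - mid[0])
    return mid, theta
```

With the pose in touchscreen coordinates and the touchscreen's own pose known in the robot's frame, a simple transform chain places the downloaded CAD model (and its stored grasp pose) in the world, with no camera involved.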
Source: Open Source Project Proposes Vision-Free Grasping With RFID and Touchscreens – IEEE Spectrum