Open Source Project Proposes Vision-Free Grasping With RFID and Touchscreens
Liatris is an open-source hardware and software project (led by roboticist Mark Silliman) that does away with vision completely. Instead, it determines the identity and pose of slightly modified objects with just a touchscreen and an RFID reader. It's simple, relatively inexpensive, and as long as you're not trying to handle objects it has never seen before, it works impressively well.
To get around the perception problem, Liatris uses a few clever tricks. First, each object has an RFID tag attached to it with a unique identifier, so that the robot can wirelessly detect what it’s working with. Once the robot has scanned the RFID tag, it looks the identifier up in an open source, global database of objects and downloads a CAD model and a grasp pose that “defines the ideal pose for the gripper prior to grasping the object.”
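The lookup step described above can be sketched as a simple keyed database query. Note that the schema, field names, identifiers, and URL below are illustrative assumptions, not Liatris's actual database API:

```python
# Sketch of the RFID lookup step: the reader returns a unique identifier,
# which keys into a shared object database holding a CAD model reference
# and the ideal pre-grasp gripper pose. All values here are hypothetical.
OBJECT_DB = {
    "e200-3412-0123": {
        "name": "coffee mug",
        "cad_model_url": "https://example.com/models/mug.stl",  # hypothetical
        # Ideal gripper pose prior to grasping, in the object's frame:
        # (x, y, z, roll, pitch, yaw)
        "grasp_pose": (0.0, 0.05, 0.12, 0.0, 1.57, 0.0),
    },
}

def lookup(rfid_uid):
    """Return the CAD model reference and grasp pose for a scanned tag."""
    record = OBJECT_DB.get(rfid_uid)
    if record is None:
        # Without a model and grasp pose, the robot cannot proceed.
        raise KeyError(f"unknown object: {rfid_uid}")
    return record["cad_model_url"], record["grasp_pose"]
```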
So now that you know what the object is and how to grasp it, you just need to know exactly where it is and how it's oriented. You can't get that sort of information easily from an RFID tag, so this is where the touchscreen comes in: each object is (slightly) modified with an isosceles triangle of conductive points on its base, giving the touchscreen an exact location for the object as well as its orientation, courtesy of the pointy end of the triangle. With this data, the robot can accurately visualize the CAD model of the object on the touchscreen, and as long as it knows exactly where the touchscreen is, it can grasp the real object based solely on the model. The robot doesn't have to "see" anything: you just need the touchscreen and an RFID reader, and a headless robot arm can grasp just about whatever you want it to.
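The pose-recovery geometry described above is simple enough to sketch: given the three touch points, the apex of the isosceles triangle is the vertex opposite the one side whose length differs from the other two, and the heading from the base midpoint toward that apex gives the object's orientation. This is a minimal sketch under my own assumptions about point ordering and conventions; it is not the project's actual code:

```python
import math

def object_pose(points):
    """Estimate a 2D pose from three touchscreen contact points that
    form an isosceles triangle on the object's base.

    points: list of three (x, y) touchscreen coordinates, any order.
    Returns (cx, cy, theta): the triangle centroid and the heading (radians)
    from the base midpoint toward the apex (the "pointy end").
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Side i is the side opposite vertex i.
    sides = [dist(points[(i + 1) % 3], points[(i + 2) % 3]) for i in range(3)]

    # The base is the side whose two neighbors are (nearly) equal in length;
    # the apex is the vertex opposite that base.
    apex_idx = min(range(3),
                   key=lambda i: abs(sides[(i + 1) % 3] - sides[(i + 2) % 3]))
    apex = points[apex_idx]
    b1, b2 = (points[j] for j in range(3) if j != apex_idx)
    base_mid = ((b1[0] + b2[0]) / 2, (b1[1] + b2[1]) / 2)

    # Orientation: direction from the base midpoint to the apex.
    theta = math.atan2(apex[1] - base_mid[1], apex[0] - base_mid[0])
    cx = sum(p[0] for p in points) / 3
    cy = sum(p[1] for p in points) / 3
    return cx, cy, theta
```

With the pose known in touchscreen coordinates, and the touchscreen's own pose known in the robot's frame, a single transform places the downloaded CAD model where the real object sits.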
Source: Open Source Project Proposes Vision-Free Grasping With RFID and Touchscreens – IEEE Spectrum