For humans, it’s easy to predict how an object will feel by looking at it, or to tell what an object looks like by touching it — yet both remain a major challenge for machines. Now, a new robot developed by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is attempting to do both.
The team took a KUKA robot arm and added a tactile sensor called GelSight, which was made by Ted Adelson’s group at CSAIL. The information collected by GelSight was then fed to an AI so it could learn the relationship between visual and tactile information.
To teach the AI how to classify objects by touch, the team recorded 12,000 videos of roughly 200 objects — such as tools, fabrics, and household products — being touched. The videos were broken down into still frames, and the AI used this dataset to connect tactile and visual data.
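The core of that pipeline is simple: each decoded camera frame is matched with the tactile reading captured at the same moment, producing paired training examples. The sketch below illustrates the idea with simulated arrays; the frame sizes, function name, and data are hypothetical stand-ins, not the team’s actual code.

```python
import numpy as np

def make_pairs(visual_frames, tactile_frames):
    """Pair each camera frame with the tactile image captured at the
    same time step, yielding (visual, tactile) training examples.
    Both inputs are hypothetical stand-ins for decoded video frames."""
    assert len(visual_frames) == len(tactile_frames), "streams must align"
    return list(zip(visual_frames, tactile_frames))

# Simulated data: 5 RGB camera frames and 5 tactile-sensor images
visual = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(5)]
tactile = [np.zeros((32, 32, 3), dtype=np.uint8) for _ in range(5)]

pairs = make_pairs(visual, tactile)
print(len(pairs))  # 5 aligned (visual, tactile) examples
```

A model trained on such pairs can then be run in either direction: given a visual frame, predict the tactile signal, or given a tactile signal, retrieve what the touched region looks like.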
“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” says Yunzhu Li, CSAIL Ph.D. student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”
For now, the robot can only identify objects in a controlled environment. The next step is to build a larger data set so the robot can work in more diverse settings.
“Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘Is this object hard or soft?’ or ‘If I lift this mug by its handle, how good will my grip be?’” says Andrew Owens, a postdoctoral researcher at the University of California at Berkeley. “This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”