Sensitive robot ‘thumb’ uses computer vision to ‘feel’ touch

Researchers in Germany have created an artificial thumb with an elastomer exterior and metal interior equipped with a small fish-eye lens that detects pressure – both its magnitude and direction.

A thumb-shaped sensor with a camera hidden inside is trained to infer haptic contact information.

A team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) has constructed “a robust, soft, low-cost, vision-based, thumb-sized 3D haptic sensor named Insight.” The soft haptic sensor uses computer vision and a deep neural network “to accurately estimate where objects come into contact with the sensor and how large the applied forces are.”

This is a significant development – robots will one day be able to “feel” their environment as accurately as humans and animals do. A news release heralds the fingertip sensor as very sensitive, robust and high-resolution, just like its natural counterpart.

The scientists describe the thumb-shaped sensor in a paper published in the journal Nature Machine Intelligence: “Constructed around an internal monocular camera, the sensor has only a single layer of elastomer over-molded on a stiff frame to guarantee sensitivity, robustness, and soft contact.”

The single layer of elastomer is replaceable, while the lightweight stiff skeleton holds up the structure like a bone would if the thumb were human. Because the elastomer is mixed with dark but reflective aluminium flakes, the opaque grey shell does not let any light in.

Inside the robotic finger, a tiny fish-eye camera with a 160-degree field of view records colour images of the interior, which is illuminated by a ring of LEDs. When any object touches the sensor’s shell, the colour pattern inside the sensor, as seen by the camera, changes. The camera records images multiple times per second and transmits this data to a deep neural network.
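To make that sensing loop concrete, here is a minimal Python sketch of how frames from such an internal camera could be captured and screened for contact using OpenCV. The device index, the reference-frame differencing, and the threshold are illustrative assumptions, not the method from the paper:

```python
import cv2          # OpenCV, for reading frames from the internal camera
import numpy as np

# Hypothetical sensing loop: capture a reference image of the untouched
# shell, then flag frames whose per-pixel brightness pattern has changed.
cap = cv2.VideoCapture(0)                  # device index 0 is an assumption
ok, reference = cap.read()                 # shell at rest, no contact
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(frame, reference)   # per-pixel change in light
    if np.mean(diff) > 2.0:                # arbitrary contact threshold
        # In Insight, the frame itself goes to the neural network, which
        # infers contact position and force from the colour pattern.
        pass
cap.release()
```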

The algorithm is sensitive enough to identify even the smallest change in light in each pixel. The trained machine-learning model can notice “exactly where the finger is contacting an object, determine how strong the forces are and indicate the force direction” in milliseconds.

The model, according to the researchers, infers a force map: “it provides a force vector for every point in the three-dimensional fingertip.”
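As a rough illustration of what producing such a force map could look like in code, here is a minimal PyTorch-style sketch. The architecture, layer sizes, and number of surface points are hypothetical placeholders, not the network described in the Nature Machine Intelligence paper:

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Hypothetical stand-in for Insight's network: maps one camera
    image to a per-point force map (3 force components per point)."""
    def __init__(self, num_surface_points: int = 1000):  # count is assumed
        super().__init__()
        self.encoder = nn.Sequential(          # downsample the fisheye image
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, num_surface_points * 3)
        self.num_points = num_surface_points

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image).flatten(1)
        # One 3D force vector (Fx, Fy, Fz) per point on the fingertip.
        return self.head(feats).view(-1, self.num_points, 3)

# One camera frame in, a full force map out; contact location can be
# read off as the surface point carrying the largest force.
model = ForceMapNet().eval()
frame = torch.rand(1, 3, 160, 160)             # placeholder camera image
with torch.no_grad():
    force_map = model(frame)                   # shape (1, num_points, 3)
contact_idx = force_map.norm(dim=-1).argmax()  # strongest-contact point
```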

The researchers have prepared a video to demonstrate how the haptic finger and its precise sensor work.

“We achieved this excellent sensing performance through the innovative mechanical design of the shell, the tailored imaging system inside, automatic data collection, and cutting-edge deep learning,” says Georg Martius, Max Planck Research Group Leader at MPI-IS, where he heads the Autonomous Learning Group. 

Martius’ PhD student Huanbo Sun adds: “Our unique hybrid structure of a soft shell enclosing a stiff skeleton ensures high sensitivity and robustness. Our camera can detect even the slightest deformations of the surface from one single image.” 

The sensor was so sensitive, in fact, that when the scientists were testing it, they found that it could “feel” its own orientation relative to gravity.

Katherine J. Kuchenbecker, Director of the Haptic Intelligence Department at MPI-IS and the third member of the research team, compares Insight favourably to previous attempts at making touch-sensitive robotic sensors.

“Previous soft haptic sensors had only small sensing areas, were delicate and difficult to make, and often could not feel forces parallel to the skin, which are essential for robotic manipulation like holding a glass of water or sliding a coin along a table,” she explains.

In order to “teach” the sensor, Huanbo Sun designed a testbed to create the training data needed for the machine-learning model to discern “the correlation between the change in raw image pixels and the forces applied.”

The testbed applies pressure all around the sensor’s surface and, as the camera detects the changes in light, records the true contact force vector. Sun’s testbed produced about 200,000 measurements. The data, collected over nearly three weeks, was used to train the machine-learning model in one day.
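A supervised pipeline of that shape can be sketched as follows; the toy tensors stand in for the testbed’s image–force pairs, and the tiny regressor and hyperparameters are assumptions for illustration only:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the testbed data: each sample pairs a camera image
# with the ground-truth contact force vector measured by the testbed.
images = torch.rand(256, 3, 160, 160)   # placeholder frames, not real data
forces = torch.rand(256, 3)             # (Fx, Fy, Fz) per probed contact
loader = DataLoader(TensorDataset(images, forces),
                    batch_size=32, shuffle=True)

model = nn.Sequential(                  # tiny regressor, not Insight's net
    nn.Conv2d(3, 16, 5, stride=4),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 39 * 39, 3),         # 39x39 feature map after the conv
)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                  # regression: predicted vs true force

for epoch in range(3):
    for img, target in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(img), target)  # force-prediction error
        loss.backward()
        optimizer.step()
```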

Insight passed the long experiment with flying colours; its exposure to many different contact forces demonstrated how robust the mechanical design is. Tests with a larger probe suggested that the sensing system generalises very well.

The thumb-shaped sensor is wrapped in a layer of elastomer 4 millimetres thick, except for the ‘nail’ region, which is only 1.2 millimetres thick. This thinner, more sensitive zone is “designed to detect even tiny forces and detailed object shapes.”

“The hardware and software design we present in our work can be transferred to a wide variety of robot parts with different shapes and precision requirements. The machine-learning architecture, training, and inference process are all general and can be applied to many other sensor designs,” Huanbo Sun concludes.
