Get a grip —

Hypersensitive robot hand is eerily human in how it can feel things

Getting it to work required integrating multiple types of machine learning.

[Image: robotic fingers gripping a mirrored disco ball, light reflecting off it.]

From bionic limbs to sentient androids, robotic entities in science fiction blur the boundaries between biology and machine. Real-life robots are far behind in comparison. While we aren’t going to reach the level of Star Trek’s Data anytime soon, there is now a robot hand with a sense of touch that is almost human.

One thing robots have not been able to achieve is a level of sensitivity and dexterity high enough to feel and handle things as humans do. Enter a robot hand developed by a team of researchers at Columbia University. (We covered their work five years ago, when this achievement was still just a concept.)

This hand doesn’t just pick things up and put them down on command. It is sensitive enough to actually “feel” what it is touching, and dexterous enough to easily reposition its fingers for a better hold on an object, a maneuver known as “finger gaiting.” It can even do all of this in the dark, figuring everything out by touch.

Navigating state space

“[This is] a novel method for achieving dexterous manipulation of complex objects, while simultaneously securing the object without the use of passive support surfaces,” the researchers said in a study recently posted to the preprint server arXiv.

To create this hand, the Columbia team needed to find the most effective way for it to navigate what’s called a state space structure. Every possible configuration of a system makes up its state space; the state space structure describes how the robot can move from one configuration to the next within that space. Different machine learning methods can train it to do this.
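As a rough illustration of what a “state” means here, a manipulation state can be modeled as a single vector bundling the hand’s joint angles with the object’s pose, and a planner’s job is to string such states together. The field names and dimensions below are assumptions for illustration, not taken from the paper:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HandState:
    """One point in the hand's state space (fields are illustrative)."""
    joint_angles: np.ndarray  # e.g., 16 joint angles for a four-finger hand
    object_pose: np.ndarray   # object position (x, y, z) plus orientation quaternion

    def as_vector(self) -> np.ndarray:
        """Flatten into a single vector that planners and policies can consume."""
        return np.concatenate([self.joint_angles, self.object_pose])
```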

A common way of training a robot is known as reinforcement learning (RL). This can be thought of as the “good bot” versus “bad bot” approach. The robot’s control software is “rewarded” for accomplishing what it is supposed to and “punished” for anything it does incorrectly. It learns through trial and error until it can recognize how it is supposed to behave. Unfortunately, RL has its drawbacks: the slightest deviation from the expected state can cause the robot to drop an object.
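The team trains continuous control policies, which is far more involved, but the reward-and-punish idea can be sketched with tabular Q-learning on a made-up “grip quality” problem, where tightening or loosening the grip eventually reaches a secure grasp (reward) or drops the object (penalty). Everything here, states, actions, and rewards, is a toy assumption, not the paper’s setup:

```python
import random

# Toy MDP standing in for a grasp task: states are grip qualities 0..4,
# state 4 = "secure grasp" (reward), state 0 = "object dropped" (penalty).
N_STATES, GOAL, DROP = 5, 4, 0
ACTIONS = [-1, +1]  # loosen or tighten the grip (illustrative)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(2000):
    s = 2  # start each episode from a middling grip
    while s not in (GOAL, DROP):
        # epsilon-greedy: explore a random action sometimes, else exploit
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, DROP), GOAL)
        r = 1.0 if s_next == GOAL else (-1.0 if s_next == DROP else 0.0)
        # Q-learning update: nudge the estimate toward reward + best future value
        best_next = 0.0 if s_next in (GOAL, DROP) \
            else max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
```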

So the team also used sampling-based planning (SBP) algorithms to give the robot a better grip (pun intended) on its state space structure. SBP doesn’t need to go over every possible set of motions to get through a state space; instead, it randomly samples different trajectories. Every successful maneuver the robot tries with SBP is stored as a new branch on a digital tree, which the AI can later fall back on when it needs to solve a problem. SBP still has its issues: it can only rely on what it has done before, and unexpected obstacles in a state space can be a problem.
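The tree-building idea can be sketched in a few lines: repeatedly pick an existing node, try a random short maneuver from it, and keep the result as a new branch only if it succeeds. The maneuver and validity functions below are invented stand-ins, not the paper’s code:

```python
import random

def random_maneuver(state):
    """Propose a small random change to the state (stand-in for a finger motion)."""
    return tuple(x + random.uniform(-0.1, 0.1) for x in state)

def is_valid(state):
    """Stand-in feasibility check, e.g., 'the object is still held'."""
    return all(-1.0 <= x <= 1.0 for x in state)

def grow_tree(start, iterations=1000):
    """Grow a tree of reachable states; each success becomes a new branch."""
    nodes = [start]
    parent = {start: None}
    for _ in range(iterations):
        base = random.choice(nodes)        # pick an existing node to extend
        candidate = random_maneuver(base)  # try a random short trajectory
        if is_valid(candidate):            # keep it only if it succeeds
            nodes.append(candidate)
            parent[candidate] = base
    return nodes, parent

nodes, parent = grow_tree(start=(0.0, 0.0))
```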

“[We used] the strength of both RL and SBP methods in order to train motor control policies for in-hand manipulation with finger gaiting,” the researchers said. “We aim to manipulate more difficult objects, including concave shapes, while securing them at all times without relying on support surfaces.”
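The article doesn’t detail how the two methods are wired together. One common pattern, offered here as an assumption rather than the paper’s actual recipe, is to turn planner-found paths into supervision that warm-starts a control policy, which RL then fine-tunes:

```python
import numpy as np

def planner_paths_to_dataset(paths):
    """Turn planner-found paths into (state, action) training pairs,
    treating the step between consecutive states as the 'action'."""
    X, Y = [], []
    for path in paths:
        for s, s_next in zip(path[:-1], path[1:]):
            X.append(s)
            Y.append(np.subtract(s_next, s))  # action = state delta
    return np.array(X), np.array(Y)

def fit_linear_policy(X, Y):
    """Least-squares fit of action ~ state @ W: a crude stand-in for the
    neural-network policy that RL would then fine-tune."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W
```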

Coming to its senses

For an AI, coming up with a set of directions is the easy part. It can tell the robot what to do, but most robots cannot provide much in the way of feedback. The new robot hand goes beyond that, with fingers that can feel exactly what they are touching and sense the movement and location of an object. To do this, it needed another algorithm: the rapidly-exploring random tree (RRT). This algorithm is behind the hand’s ability to handle more difficult objects. RRT grows the tree toward unexplored regions of the state space and returns a branch that reaches the state representing an accomplished task.
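A minimal two-dimensional RRT looks like the sketch below: sample a random point, extend the tree’s nearest node a small step toward it, and stop once a node lands close enough to the goal, then walk the parent pointers back to recover the path. The real state space is far higher-dimensional; the dimensions, step size, and tolerance here are illustrative. Note that a plain RRT returns a feasible path, not necessarily the shortest one:

```python
import math
import random

def nearest(nodes, target):
    """Find the tree node closest to a sampled point (Euclidean distance)."""
    return min(nodes, key=lambda n: math.dist(n, target))

def steer(from_node, to_point, step=0.1):
    """Move a small step from an existing node toward the sampled point."""
    d = math.dist(from_node, to_point)
    t = min(1.0, step / d) if d > 0 else 0.0
    return tuple(f + t * (p - f) for f, p in zip(from_node, to_point))

def rrt(start, goal, iterations=5000, goal_tol=0.05):
    nodes, parent = [start], {start: None}
    for _ in range(iterations):
        sample = tuple(random.uniform(-1, 1) for _ in start)  # random state
        near = nearest(nodes, sample)
        new = steer(near, sample)  # extend the tree toward the sample
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the start to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None  # no path found within the budget

path = rrt(start=(0.0, 0.0), goal=(0.8, 0.8))  # list of waypoints, or None
```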

This combination of algorithms made the robot hand unlike any other. The researchers taught it to keep at least three fingers in contact with the object and to balance the force applied by each finger, whether an object started to slip or its shape required different amounts of pressure to maintain a grip. Closed-loop control was also used to further train the hand, giving it feedback at various points throughout the process.
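The hand’s controller is learned, but the closed-loop idea (measure, compare to a target, correct) can be illustrated with a simple proportional update over three fingertip forces. All of the numbers and targets below are made up for the sketch:

```python
def balance_grip_forces(measured, target, gain=0.5):
    """One closed-loop step: nudge each fingertip's commanded force toward
    its target, based on sensor feedback (illustrative proportional control)."""
    return [f + gain * (t - f) for f, t in zip(measured, target)]

# Example: three fingertips in contact; one is slipping (low measured force),
# so the controller raises its command while easing the others.
measured_forces = [1.8, 0.4, 2.1]  # newtons, from tactile sensors (made up)
target_forces   = [1.5, 1.5, 1.5]  # balanced grip target (made up)
commands = balance_grip_forces(measured_forces, target_forces)
```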

This robotic hand is just as dexterous in the dark as it is when it can “see” its surroundings, much like a human hand feeling around for something. This relies on proprioceptive sensing, which many organisms are capable of. Because the hand has such an acute sense of touch, it could potentially serve as a more advanced form of assistance for people who need help with certain tasks.

We’re still far from androids like Data, who can sense just about anything. But we at least now have a robotic hand that is dexterous and sensitive enough to literally keep in touch.

Elizabeth Rayne is a creature who writes. Her work has appeared on SYFY WIRE, Space.com, Live Science, Grunge, Den of Geek, and Forbidden Futures. When not writing, she is either shapeshifting, drawing, or cosplaying as a character nobody has ever heard of. Follow her on Twitter @quothravenrayne.
