Cambridge University scientists created a 3D-printed robot hand that is able to grasp objects

A soft, 3D-printed robotic hand was developed by researchers at the University of Cambridge. The robot uses only its wrist movement and “skin” sensors to hold different objects without dropping them.

The model is designed with anthropomorphic characteristics, including five fingers and a flexible, soft skin fitted with 32 barometric (pressure) sensors.

Thanks to these human-like features, the robot hand can adapt its movements to grasp objects of different shapes and sizes.

The robot hand can hold an object without dropping it

Unlike other robot hands, which use motors to move their fingers, this new design relies only on passive (non-powered) finger movement, driven by wrist control and soft sensors (see the video below).

Robot hand during the learning process

The model architecture

The hand’s components were fabricated by 3D printing; the detailed parts (the bones and receptor molds) were produced on a Stratasys Objet500 3D printer.

To successfully grasp and manipulate objects the model has two main components:

  • A grasp planning algorithm that analyzes the geometry and physical properties of the object and generates a set of parameters (such as joint angles and gripper configuration) that the robot uses to control the gripping process.
  • An error prediction and recovery control system that monitors the grasping process. It predicts errors in real time and adjusts the robot’s parameters (control signals, gripper force, grasp configuration). It is built around a Long Short-Term Memory (LSTM) network.
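As a rough illustration of the first component, a grasp planner can be thought of as a function mapping object properties to control parameters. The sketch below is purely illustrative: the diameter-based heuristic, parameter names, and `GraspPlan` structure are assumptions, not the authors’ algorithm.

```python
from dataclasses import dataclass

@dataclass
class GraspPlan:
    """Parameters a controller could use to execute a grasp (hypothetical)."""
    joint_angles_deg: list   # one target angle per finger
    wrist_roll_deg: float
    wrist_pitch_deg: float

def plan_grasp(object_diameter_mm: float) -> GraspPlan:
    # Toy heuristic: the wider the object, the more open the fingers.
    # This rule is an illustration only, not the paper's planner.
    openness = min(object_diameter_mm / 100.0, 1.0)
    closed_angle = 90.0 * (1.0 - openness)
    return GraspPlan(
        joint_angles_deg=[closed_angle] * 5,  # five anthropomorphic fingers
        wrist_roll_deg=0.0,
        wrist_pitch_deg=0.0,
    )
```

In a real system the planner would also weigh object mass and surface friction; the point here is only the mapping from object description to a fixed set of grip parameters.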


During the training phase, the robot learned to grasp a sphere and subsequently to release it onto a plate featuring a shallow dip, allowing the ball to return to its initial position for another trial.

Error detection and recovery from passive perception

The grasp outcome was computed by the prediction network, which took the sensor data as input and output a probability of success or failure for the grasp.

If the network predicted a failure, the wrist moved or rotated to improve the grasp.
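This recovery behavior can be sketched as a simple closed loop: accumulate sensor readings, query the predictor, and command a corrective wrist motion whenever the predicted failure probability crosses a threshold. The threshold value and the function names below are assumptions for illustration, not details from the paper.

```python
FAILURE_THRESHOLD = 0.5  # assumed cutoff; the paper does not state a value

def monitor_grasp(predict_failure, read_sensors, adjust_wrist, steps=10):
    """Closed-loop error recovery (illustrative sketch).

    predict_failure: maps the sensor history to a failure probability in [0, 1]
    read_sensors:    returns one frame of the 32 pressure-sensor readings
    adjust_wrist:    commands a corrective wrist move or rotation
    """
    history = []
    corrections = 0
    for _ in range(steps):
        history.append(read_sensors())
        if predict_failure(history) > FAILURE_THRESHOLD:
            adjust_wrist()  # try to improve the grasp before it fails
            corrections += 1
    return corrections
```

Because the prediction runs on every sensor frame, a corrective motion can be issued before the object is actually dropped, which is the key advantage over detecting failure after the fact.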

The prediction network architecture

The prediction network was designed to perform a regression whose output ranges from success (0) to failure (1). It contains two LSTM layers, with 60 and 15 units respectively, and a dropout unit in between.

  1. The first LSTM layer performs a “sequence-to-sequence” transformation.
  2. The second LSTM layer performs a “sequence-to-one” transformation (takes a sequence input and predicts a single value).
  3. The dropout unit randomly turns off some units during training, to prevent the network from memorizing the data (overfitting).
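Assuming 32-channel pressure readings as input, the stack described above could be sketched in PyTorch as follows. The dropout rate, the sigmoid output head, and the input dimensions are assumptions; the source specifies only the two LSTM layers (60 and 15 units) with dropout in between.

```python
import torch
import torch.nn as nn

class GraspOutcomePredictor(nn.Module):
    """Two stacked LSTMs (60 then 15 units) with dropout in between,
    predicting a scalar in [0, 1]: 0 ~ success, 1 ~ failure."""

    def __init__(self, n_sensors: int = 32, dropout: float = 0.5):
        super().__init__()
        # 1. Sequence-to-sequence: emits an output at every time step.
        self.lstm1 = nn.LSTM(input_size=n_sensors, hidden_size=60, batch_first=True)
        self.dropout = nn.Dropout(dropout)  # rate is an assumption
        # 2. Sequence-to-one: we keep only the final time step's output.
        self.lstm2 = nn.LSTM(input_size=60, hidden_size=15, batch_first=True)
        self.head = nn.Linear(15, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_sensors)
        seq, _ = self.lstm1(x)
        seq = self.dropout(seq)
        seq, _ = self.lstm2(seq)
        last = seq[:, -1, :]                   # sequence-to-one reduction
        return torch.sigmoid(self.head(last))  # failure probability
```

Taking only the last time step of the second LSTM is what turns a full sequence of sensor frames into the single success/failure value described above.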

Tests and evaluation results

The team carried out 1,240 real-world grasping tests (650 trials for training, 250 for testing), using various objects such as a peach, a computer mouse, and a roll of bubble wrap.

The researchers reported that the robot could successfully grasp 11 out of 14 randomly chosen objects of similar size to the sphere used in the training stage.

The network accurately and rapidly predicted the grasp outcome before complete failure, enabling the robot to learn from errors and improve its performance over time.


This soft anthropomorphic hand’s new design could be used to create low-cost robots with a high degree of control.

Incorporating computer vision systems could significantly enhance the robot hand’s performance, enabling it to carry out tasks that demand fine-tuned or precise movements, such as surgical procedures, manufacturing processes, and logistics operations.
