Soft Robotics
Soft robotics is a subfield of robotics that focuses on building and controlling robots made of flexible, deformable materials instead of traditional rigid links. These compliant materials allow a robot to change shape, either passively or actively, and make it highly sensitive to external contact. As a result, soft robots can grasp and manipulate objects with irregular surfaces more safely and effectively than traditional rigid robots. However, because a soft body can deform in many dimensions at once, proprioceptive sensing has proven to be a great challenge, and very few methods have provided a reliable 3D representation using embedded sensors.
The primary objective of this research project was to develop a soft robotic finger capable of 3D shape reconstruction and force estimation using vision-based proprioception and deep learning. The ultimate goal is to employ soft finger grippers to pick up, handle, and sense fragile objects that would be challenging for conventional rigid grippers. The soft finger was designed with a constraint layer that forces it to extend along a particular curve when driven by pneumatic actuation. In addition, a data-gathering system was built to collect training data for the neural network models.
The system consists of an RGB camera that captures the interior of the soft finger, whose inner surface carries bumps in a specific pattern, and two RGBD cameras that capture the external shape and geometry of the finger. The deformation of the embedded bumps, combined with the depth data from the RGBD cameras, allows a convolutional neural network to associate internal deformations with specific 3D shapes and bending angles. This correlation lets the network estimate the 3D shape and angle of the soft finger under arbitrary deformations from the embedded images alone.
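To make the training data concrete, here is a minimal sketch of how each training sample could be stored as a pair of an embedded image and a target point cloud. The file layout, function names, and array shapes are illustrative assumptions on my part, not the exact format used in the project.

```python
import numpy as np

def save_sample(path, embedded_img, target_points):
    """Store one training pair: internal camera frame + external 3D shape.

    embedded_img:  (H, W) or (H, W, 3) array from the camera inside the finger
    target_points: (N, 3) point cloud of the finger surface from the RGBD cameras
    """
    np.savez_compressed(path, image=embedded_img, points=target_points)

def load_sample(path):
    data = np.load(path)
    return data["image"], data["points"]
```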
The soft finger was designed with three components made of different materials to achieve the desired bending motion. The main body of the finger was cast in Ecoflex 00-30, a silicone that allows up to 900% elongation. A rectangular cavity runs through the body, sealed tightly around an air tube on one side and left open on the other for the installation of the embedded camera. On the inside of the cavity, a pattern of three and two bumps was added so the embedded deformations could be localized. The base and the caps were made of a slightly stiffer material (Flexible 80A from Formlabs), which still deforms but with higher resistance. Because the base resists stretching more than the main body, the finger is forced to bend toward it when inflated. Finally, the exoskeleton was made of a much stiffer material (Grey Pro from Formlabs) to prevent the finger from inflating in every other direction. It was designed as two helices rotating in opposite directions, creating a straight bending path that guided the soft finger's deformation.
The finger was installed in a purpose-built experimental setup, facing downward and free-floating so it could bend as much as possible. A white scanning powder was applied to its surface so the RGBD cameras could reliably capture its depth image.
Two RGBD cameras were placed at opposing 45-degree angles to capture as much detail of the soft finger as possible. A green screen behind the finger eliminated geometric noise from the background while the two RGBD cameras were calibrated and their depth images combined into one.
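As a rough illustration of that merging step, the sketch below fuses the two cameras' point clouds into a common frame using Open3D, assuming the extrinsic transform between the cameras is already known from the calibration. The ICP refinement, distance thresholds, and variable names are my own assumptions, not the project's actual code.

```python
import numpy as np
import open3d as o3d

def merge_views(points_cam1, points_cam2, T_cam2_to_cam1):
    """Fuse two depth-camera point clouds into one cloud in camera 1's frame.

    points_cam1, points_cam2: (N, 3) arrays from each RGBD camera
    T_cam2_to_cam1: 4x4 extrinsic matrix from the calibration step
    """
    pcd1 = o3d.geometry.PointCloud()
    pcd1.points = o3d.utility.Vector3dVector(points_cam1)
    pcd2 = o3d.geometry.PointCloud()
    pcd2.points = o3d.utility.Vector3dVector(points_cam2)

    # Bring camera 2's cloud into camera 1's coordinate frame.
    pcd2.transform(T_cam2_to_cam1)

    # Optionally refine the alignment with a few ICP iterations.
    icp = o3d.pipelines.registration.registration_icp(
        pcd2, pcd1, max_correspondence_distance=0.005,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    pcd2.transform(icp.transformation)

    # Concatenate the two views and lightly downsample the fused cloud.
    return (pcd1 + pcd2).voxel_down_sample(voxel_size=0.002)
```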
The experimental setup was controlled by one central computer, which ran the main code to activate the RGBD cameras and to capture, combine, and store their depth images. At the same time, it sent a signal to a Raspberry Pi to activate the pneumatic pump and inflate the soft finger. In the video, the main computer running the code is on the left, the Raspberry Pi desktop displaying the collected embedded images is on the right, and the soft finger bending to arbitrary positions is in the middle.
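On the Raspberry Pi side, the trigger could be as simple as a small socket server that toggles a GPIO pin wired to the pump's relay. This is a minimal sketch under assumed details (port 5005, BCM pin 17, an "INFLATE" message format), not the setup's actual control code; the central computer would send the matching command with something like `socket.create_connection(("pi-address", 5005)).sendall(b"INFLATE 1.5")` right before capturing a depth image.

```python
import socket
import time
import RPi.GPIO as GPIO

PUMP_PIN = 17  # assumed BCM pin driving the pump relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)

# Listen for inflation commands from the central computer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5005))
server.listen(1)

try:
    while True:
        conn, _ = server.accept()
        msg = conn.recv(64).decode().strip()
        if msg.startswith("INFLATE"):
            # Assumed message format: "INFLATE <seconds>".
            duration = float(msg.split()[1])
            GPIO.output(PUMP_PIN, GPIO.HIGH)   # start the pneumatic pump
            time.sleep(duration)
            GPIO.output(PUMP_PIN, GPIO.LOW)    # stop inflating
            conn.sendall(b"DONE")
        conn.close()
finally:
    GPIO.cleanup()
```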
A script was then written to autonomously import the embedded images from the Raspberry Pi and pair each one with its corresponding depth image. That dataset was fed into a machine learning model adapted from the DeepSoRo paper by NYU assistant professor Chen Feng. The model learns the correlation between the soft finger's embedded image and the RGBD cameras' depth image, which then lets it predict the finger's 3D shape from embedded images it was never trained on. The results can be seen in the image below: the first image is the embedded picture of the soft finger, the second is the depth image taken by the RGBD cameras, and the third is the predicted 3D shape of the finger. As can be observed, the predicted 3D shape closely resembles the true RGBD image, with just under 2.3% relative error in early testing, showing that this method can represent the 3D shape of a soft finger under arbitrary deformations.
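For readers curious what such a model can look like, below is a heavily simplified PyTorch sketch in the spirit of DeepSoRo: a small convolutional encoder maps the embedded image to a fixed-size point set, trained with a Chamfer distance against the fused RGBD point cloud. The layer sizes, the 1024-point output, and the training snippet are my own illustrative choices, not the published network.

```python
import torch
import torch.nn as nn

class EmbeddedImageToShape(nn.Module):
    """Toy encoder: grayscale embedded image -> (N, 3) predicted point cloud."""
    def __init__(self, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(64 * 4 * 4, num_points * 3)

    def forward(self, img):                      # img: (B, 1, H, W)
        return self.head(self.encoder(img)).view(-1, self.num_points, 3)

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between (B, N, 3) and (B, M, 3) point sets."""
    d = torch.cdist(pred, target)                # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

# One illustrative training step on a random batch.
model = EmbeddedImageToShape()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
img, gt = torch.rand(8, 1, 128, 128), torch.rand(8, 2048, 3)
loss = chamfer_distance(model(img), gt)
opt.zero_grad(); loss.backward(); opt.step()
```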
Given how accurately the ML model reconstructed the 3D shape of the soft finger, the project had sufficient substance, innovation, and relevance to be published. Therefore, a paper was submitted to IEEE (the Institute of Electrical and Electronics Engineers), which describes itself as "The world's largest technical professional organization dedicated to advancing technology for the benefit of humanity," and it was accepted and recently published. This is a huge recognition of the work that I did alongside my collaborators, and it leaves us thrilled that our work can keep pushing the boundaries of cutting-edge technology. You can find the paper here.
Thank you for reading,
Please reach out if you liked the project; more about me and my contact information can be found in my About Me section at the top right!