
Researchers at Johns Hopkins University (JHU) have made a significant breakthrough in robotic surgery by teaching a robot to perform complex medical procedures using imitation learning.
Rather than requiring each individual movement to be programmed, as traditional methods do, the robot was trained by watching videos of human surgeons performing surgeries.
Impressively, the robot executed surgical tasks with precision comparable to that of human doctors.
This imitation-learning approach brings robotic surgery closer to autonomy, in which robots could one day perform complex operations without human guidance. The milestone was presented at the Conference on Robot Learning (CoRL) in Munich, Germany, underscoring its significance for robotics and machine learning.
The robot used in this research is based on the da Vinci Surgical System, which lets surgeons perform minimally invasive procedures through small incisions. The da Vinci robot mirrors a surgeon's hand movements in real time, using multiple robotic arms controlled from a console. For the new project, the robot was trained on hundreds of surgical videos captured by wrist-mounted cameras on da Vinci instruments, enabling it to perform critical tasks such as manipulating a needle, suturing, and lifting tissue.
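To make the training recipe concrete, the sketch below shows a minimal behavior-cloning loop in the spirit of this video-based approach: a small vision network learns to map a camera frame to the kinematic action the surgeon took at that moment. Every name, network size, and the assumption that frames come pre-paired with recorded actions are illustrative choices, not details of the JHU system.

```python
# Hypothetical behavior-cloning sketch; layout and sizes are assumptions,
# not the published JHU training code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class DemoDataset(Dataset):
    """Pairs each wrist-camera frame with the surgeon's recorded action."""
    def __init__(self, frames, actions):
        self.frames = frames      # (N, 3, H, W) image tensors
        self.actions = actions    # (N, action_dim) kinematic targets

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, i):
        return self.frames[i], self.actions[i]


class VisuomotorPolicy(nn.Module):
    """Maps a camera frame to a continuous kinematic action."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(          # small CNN image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, action_dim)  # e.g. pose delta + gripper

    def forward(self, frame):
        return self.head(self.encoder(frame))


def train(policy, dataset, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # regress the expert's recorded kinematics
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    for _ in range(epochs):
        for frames, actions in loader:
            loss = loss_fn(policy(frames), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

At deployment time, a policy trained this way would be queried once per control step, with each new camera frame fed forward to obtain the next commanded motion.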
The robot's new capabilities stem from a model that pairs imitation learning with a ChatGPT-like transformer architecture: instead of producing words, it "speaks surgery" in kinematics, the mathematical language of robot motion. Remarkably, the system displayed unexpected adaptability, autonomously retrieving dropped needles even though it was never specifically trained to do so. This breakthrough in robotic training and dexterity opens up new possibilities: the video-based learning approach lets robots learn and adapt to new procedures quickly, reducing the need to hand-code each movement and making surgical robots more versatile and efficient.
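For intuition about the "ChatGPT-like" part, the sketch below shows how a GPT-style causal transformer can emit discretized motion tokens one at a time, exactly the way a language model emits words. The vocabulary size, context length, and layer dimensions are assumptions made for illustration, not the published architecture.

```python
# Illustrative only: a GPT-style decoder over discretized kinematic tokens.
import torch
import torch.nn as nn


class KinematicsGPT(nn.Module):
    """Autoregressive transformer over discretized motion tokens."""
    def __init__(self, n_tokens=256, d_model=128, n_layers=4, ctx=64):
        super().__init__()
        self.tok = nn.Embedding(n_tokens, d_model)   # motion-token embeddings
        self.pos = nn.Embedding(ctx, d_model)        # learned positions
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, n_tokens)      # next-token logits

    def forward(self, tokens):
        T = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Causal mask: each step may only attend to earlier motion tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        return self.out(self.blocks(x, mask=mask))


# Sampling a short motion sequence, exactly like sampling text from an LM:
model = KinematicsGPT()
seq = torch.zeros(1, 1, dtype=torch.long)            # start token
for _ in range(16):
    logits = model(seq)[:, -1]                       # logits for next token
    nxt = torch.distributions.Categorical(logits=logits).sample()
    seq = torch.cat([seq, nxt.unsqueeze(1)], dim=1)
```

Framed this way, a surgical demonstration becomes a "sentence" of motion tokens, which is what lets the same architecture that powers language models generalize across different procedures.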