It takes years of intense study and a steady hand for humans to perform surgery, but robots might have an easier time picking it up with today’s AI technology.
Researchers at Johns Hopkins University (JHU) and Stanford University have taught a robotic surgical system to perform a handful of key surgical tasks as capably as human doctors, simply by training it on videos of those procedures.
The team used a da Vinci Surgical System for this study. It's a robotic platform, typically remote-controlled by a surgeon, whose arms manipulate instruments for tasks like dissection, suction, and cutting and sealing vessels. Systems like these give surgeons greater control and precision, along with a closer look at patients on the operating table. The latest version is estimated to cost over US$2 million, and that doesn't include accessories, sterilizing equipment, or training.
Using a machine learning method known as imitation learning, the team trained a da Vinci Surgical System to perform three surgical tasks on its own: manipulating a needle, lifting body tissue, and suturing. Take a look.
Surgical Robot Transformer Demo
The surgical system not only executed these tasks as well as a human could, but also learned to correct its own mistakes. “Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do,” said Axel Krieger, an assistant professor at JHU who co-authored a paper on the team’s findings that was presented at this week’s Conference on Robot Learning.
The researchers trained an AI model by combining imitation learning with the same machine learning architecture that underpins popular chatbots like ChatGPT. However, while those chatbots are designed to work with text, this model outputs kinematics – the mathematical language of motion, expressed in numbers and equations – to direct the surgical system’s arms.
The model was trained using hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures.
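The paper's actual code isn't reproduced here, but the idea can be illustrated with a minimal behavior-cloning sketch in PyTorch: a small transformer policy (the class name, dimensions, and action format below are hypothetical, not the team's implementation) takes a short window of wrist-camera frames and is trained to reproduce the kinematic motions recorded alongside each demonstration video.

```python
# Minimal imitation-learning (behavior cloning) sketch, not the authors' code:
# a per-frame vision encoder feeds a small transformer that predicts kinematic
# actions (e.g. end-effector pose plus gripper state) from wrist-camera frames.
import torch
import torch.nn as nn

class SurgicalPolicy(nn.Module):
    def __init__(self, action_dim=7, d_model=256, n_frames=8):
        super().__init__()
        # Per-frame image encoder (hypothetical sizes; any CNN/ViT backbone works)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        self.pos_emb = nn.Parameter(torch.zeros(n_frames, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        # Output head produces kinematics instead of text tokens
        self.head = nn.Linear(d_model, action_dim)

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        tokens = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        tokens = tokens + self.pos_emb[:T]
        return self.head(self.transformer(tokens)[:, -1])  # action at latest frame

def train_step(policy, optimizer, frames, expert_actions):
    # Behavior cloning: regress predicted actions onto the demonstrated kinematics.
    loss = nn.functional.mse_loss(policy(frames), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this framing, each recorded procedure supplies (video frames, kinematics) pairs, and the policy simply learns to imitate the demonstrated motion rather than being hand-programmed step by step.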
The team believes its model could be used to quickly train a robot to perform any type of surgical procedure, far more easily than the traditional method of hand-coding every step required to direct a surgical robot’s actions.
According to Krieger, this could help make automated surgery a reality sooner than previously thought possible. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days,” he said. “It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.”
That could be one of the biggest breakthroughs in robot-assisted surgery in recent years. Some automated devices already exist for complex operations, like Corindus’s CorPath system for cardiovascular procedures. However, their capabilities are typically limited to certain steps of the surgeries they assist with.
Further, Krieger pointed out that coding each step for a robotic system can be awfully slow. “Someone might spend a decade trying to model suturing,” he said. “And that’s suturing for just one type of surgery.”
Krieger also previously worked on a different approach to automating surgical tasks. In 2022, his team of researchers at JHU developed the Smart Tissue Autonomous Robot, or STAR. Guided by a structured-light-based three-dimensional endoscope and a machine-learning-based tracking algorithm, the robot intricately sutured together two ends of a pig’s intestine without human intervention.
The JHU researchers are now working on training a robot with their imitation learning method to carry out a full surgery. It’ll likely be years before we see robots fully take over for surgeons, but innovations like this one could make complex treatments safer and more accessible for patients around the globe.
Source: Johns Hopkins University