Model-based medicine and intelligent operating room
Prof. Dr. Thomas Neumuth
Minimally invasive endoscopic surgery is a well-established surgical practice. However, decoupled hand-eye coordination, a limited field of view and operating space, and reduced depth perception place high demands on both the surgeon and the equipment. Faced with this complex intraoperative environment, surgeons must develop their spatial awareness and instrument-handling skills through training and live operations. Since training effects on spatial cognition and orientation capabilities vary between individuals, the quality of laparoscopic training with physical and virtual simulators depends on the predisposition of the trainee. Training effectiveness, and whether the acquired skills transfer to the operating room, is therefore generally not predictable.
As a consequence, the purpose of this project is the development of a novel training assistance system that acquires a continuous multimodal representation of a trainee's individual laparoscopic exercises, predicts current and overall training progression, and, in response, provides aural and visual feedback cues. A physical simulator extended with multiple sensor components will be used to build a knowledge base of basic bimanual laparoscopic skills. Training progression and quality, currently assessed through subjective skill questionnaires, will additionally be captured by objective, machine-readable metrics as an unbiased description of laparoscopic expertise.
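To illustrate what such objective, machine-readable metrics could look like, the sketch below computes two motion-based measures commonly used in laparoscopic skill assessment from a tracked instrument-tip trajectory: total path length and a log-dimensionless-jerk smoothness score. The function names, the sampling assumptions, and the choice of metrics are illustrative, not part of the project description:

```python
import numpy as np

def path_length(tip_positions):
    """Total distance travelled by the instrument tip.

    tip_positions: (N, 3) array of tracked 3D positions, one row per sample.
    For the same task, shorter paths generally indicate more economical,
    i.e. more expert, instrument handling.
    """
    steps = np.diff(tip_positions, axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

def motion_smoothness(tip_positions, dt):
    """Negative log dimensionless jerk; higher values mean smoother motion.

    dt: sampling interval in seconds. This is one common formulation:
    integrate squared jerk (third derivative of position) over the trial
    and normalise by duration and peak speed to make it dimensionless.
    """
    vel = np.gradient(tip_positions, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    duration = dt * (len(tip_positions) - 1)
    peak_speed = np.linalg.norm(vel, axis=1).max()
    # Rectangle-rule integration of ||jerk||^2 over the trial duration.
    integral = float(np.sum(jerk ** 2)) * dt
    return float(-np.log((duration ** 5 / peak_speed ** 2) * integral))
```

Metrics of this kind can be logged per exercise repetition, so that progression appears as decreasing path length and increasing smoothness over the course of training.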