J Neurol Surg B Skull Base 2024; 85(S 01): S1-S398
DOI: 10.1055/s-0044-1780032
Presentation Abstracts
Oral Abstracts

AI-Based Surgical Tools Detection from Endoscopic Endonasal Pituitary Videos

Margaux Masson-Forsythe (1), Juan Vivanco Suarez (2), Muhammad Ammar Haider (3), James K. Liu (4), Daniel A. Donoho (5)

Author Affiliations:
1. Surgical Data Science Collective
2. University of Iowa, Iowa City, Iowa
3. CMH Lahore Medical College, Pakistan
4. Rutgers - New Jersey Medical School, Newark, New Jersey, United States
5. Children’s National Hospital
 
 

    Introduction: Endoscopic endonasal surgery (EES) has proven to be a safe and effective option for the treatment of sellar, suprasellar, and anterior skull base lesions. However, the learning curve of EES is steep and demands intensive training and repetition. Surgical videos are a rich source of data with the potential to provide significant value for optimizing operative technique and for surgical training and education. Analyzing surgical videos, however, is challenging because it is intensive in human and economic resources. Hence, we aimed to automatically detect intraoperative surgical tools in surgical videos using an artificial intelligence (AI) methodology that can be used to extract informative analytics from the videos.

    Methods: Twenty-seven EES videos were manually labeled by a team of annotators (supervised by neurosurgical scientists) identifying 12 surgical tools: Doppler, drill, freer elevator, grasper, irrigation, Rhoton curette, Rhoton dissector, rongeur, scissor, suction, surgical knife, and unknown. The resulting dataset was split into an 80% training set and a 20% validation set. The annotated frames (labels) were used to train a YOLOv8 model, a state-of-the-art computer vision model for object detection, which identifies and localizes surgical tools in each video frame. Performance was assessed using the manual labels in the validation set as ground truth ([Fig. 1]).

    Fig. 1
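
    As a minimal sketch of the training setup described in the Methods (assuming the ultralytics Python package and a YOLO-format dataset configuration; file names and hyperparameters below are illustrative, not the authors' actual settings):

    from ultralytics import YOLO

    # Hypothetical dataset config "tools.yaml": paths to the 80%/20% frame split
    # and the 12 annotated tool classes (Doppler, drill, freer elevator, grasper,
    # irrigation, Rhoton curette, Rhoton dissector, rongeur, scissor, suction,
    # surgical knife, unknown).
    model = YOLO("yolov8n.pt")      # start from a pretrained YOLOv8 checkpoint
    model.train(
        data="tools.yaml",          # annotated pituitary-surgery frames
        epochs=100,                 # illustrative value
        imgsz=640,                  # illustrative input resolution
    )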

    Results: A total of 3,000 frames from 27 pituitary tumor surgery videos were included in the analysis: 2,400 were used to train the model and 600 for validation. On the validation set, the AI model achieved an overall precision of 77%, recall of 58%, and mean average precision (mAP) of 64%. The most accurate results were achieved when detecting and classifying the grasper (accuracy 0.89), the curette (accuracy 0.89), and the suction tool (accuracy 0.87; [Fig. 2]).

    Fig. 2
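
    As a rough illustration of how the reported precision, recall, and mAP could be read from the trained detector (assuming the ultralytics validation API, which may differ from the authors' evaluation pipeline; the checkpoint path is hypothetical):

    from ultralytics import YOLO

    model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical checkpoint path
    results = model.val(data="tools.yaml")             # evaluate on the 600 held-out frames

    print(f"precision: {results.box.mp:.2f}")     # precision averaged over classes
    print(f"recall:    {results.box.mr:.2f}")     # recall averaged over classes
    print(f"mAP@0.5:   {results.box.map50:.2f}")  # mean average precision at IoU 0.5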

    Conclusion: Using a relatively small dataset of 3,000 frames from 27 videos, an appropriately trained and validated AI algorithm is capable of autonomously detecting surgical tools in real operative videos with high accuracy and precision. Furthermore, it can form the basis for extracting additional analytics from surgical videos. By tracking tools across frames, for instance, it can quantify metrics such as tool usage time, tool motion trajectories, interaction density between tools, and tool presence. Hence, implementing AI-based methodologies for surgical tool detection unlocks a paradigm of video analysis in areas including, but not limited to, surgical workflows, skill assessment, training evaluation, and correlation of tool usage with patient outcomes. Further development of such models offers opportunities for data-driven insights into refining surgical techniques and enhancing neurosurgical patient outcomes.
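
    For instance, one simplified way to derive a per-tool usage-time estimate from per-frame detections (a sketch only; the checkpoint path, confidence threshold, and frame rate are assumptions, and the authors' analytics pipeline may differ):

    from collections import Counter
    from ultralytics import YOLO

    model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical checkpoint path
    fps = 30                                           # assumed video frame rate

    # Count frames in which each tool class is detected at least once.
    frames_with_tool = Counter()
    for result in model.predict("pituitary_case.mp4", stream=True, conf=0.5):
        detected = {int(c) for c in result.boxes.cls.tolist()}
        for class_id in detected:
            frames_with_tool[result.names[class_id]] += 1

    # Convert frame counts to approximate on-screen time per tool.
    for tool, n_frames in frames_with_tool.most_common():
        print(f"{tool}: {n_frames / fps:.1f} s on screen")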



    No conflict of interest has been declared by the author(s).

    Publication History

    Article published online:
    05 February 2024

    © 2024. Thieme. All rights reserved.

    Georg Thieme Verlag KG
    Rüdigerstraße 14, 70469 Stuttgart, Germany

     